00:00:00.001 Started by upstream project "autotest-per-patch" build number 132295 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.081 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.082 The recommended git tool is: git 00:00:00.082 using credential 00000000-0000-0000-0000-000000000002 00:00:00.086 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.144 Fetching changes from the remote Git repository 00:00:00.147 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.221 Using shallow fetch with depth 1 00:00:00.221 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.221 > git --version # timeout=10 00:00:00.287 > git --version # 'git version 2.39.2' 00:00:00.287 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.333 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.333 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.508 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.517 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.529 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:07.529 > git config core.sparsecheckout # timeout=10 00:00:07.539 > git read-tree -mu HEAD # timeout=10 00:00:07.555 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:07.571 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:07.571 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:07.642 [Pipeline] Start of Pipeline 00:00:07.653 [Pipeline] library 00:00:07.655 Loading library shm_lib@master 00:00:07.655 Library shm_lib@master is cached. Copying from home. 00:00:07.671 [Pipeline] node 00:00:07.677 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.678 [Pipeline] { 00:00:07.687 [Pipeline] catchError 00:00:07.688 [Pipeline] { 00:00:07.701 [Pipeline] wrap 00:00:07.710 [Pipeline] { 00:00:07.717 [Pipeline] stage 00:00:07.719 [Pipeline] { (Prologue) 00:00:07.904 [Pipeline] sh 00:00:08.190 + logger -p user.info -t JENKINS-CI 00:00:08.207 [Pipeline] echo 00:00:08.208 Node: CYP9 00:00:08.215 [Pipeline] sh 00:00:08.516 [Pipeline] setCustomBuildProperty 00:00:08.527 [Pipeline] echo 00:00:08.529 Cleanup processes 00:00:08.534 [Pipeline] sh 00:00:08.823 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.823 742290 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.838 [Pipeline] sh 00:00:09.126 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.126 ++ grep -v 'sudo pgrep' 00:00:09.126 ++ awk '{print $1}' 00:00:09.126 + sudo kill -9 00:00:09.126 + true 00:00:09.143 [Pipeline] cleanWs 00:00:09.154 [WS-CLEANUP] Deleting project workspace... 00:00:09.154 [WS-CLEANUP] Deferred wipeout is used... 
00:00:09.161 [WS-CLEANUP] done 00:00:09.166 [Pipeline] setCustomBuildProperty 00:00:09.182 [Pipeline] sh 00:00:09.472 + sudo git config --global --replace-all safe.directory '*' 00:00:09.541 [Pipeline] httpRequest 00:00:10.179 [Pipeline] echo 00:00:10.181 Sorcerer 10.211.164.101 is alive 00:00:10.192 [Pipeline] retry 00:00:10.195 [Pipeline] { 00:00:10.212 [Pipeline] httpRequest 00:00:10.216 HttpMethod: GET 00:00:10.217 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:10.217 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:10.233 Response Code: HTTP/1.1 200 OK 00:00:10.233 Success: Status code 200 is in the accepted range: 200,404 00:00:10.233 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:19.630 [Pipeline] } 00:00:19.647 [Pipeline] // retry 00:00:19.655 [Pipeline] sh 00:00:19.946 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:19.964 [Pipeline] httpRequest 00:00:20.464 [Pipeline] echo 00:00:20.466 Sorcerer 10.211.164.101 is alive 00:00:20.476 [Pipeline] retry 00:00:20.478 [Pipeline] { 00:00:20.492 [Pipeline] httpRequest 00:00:20.497 HttpMethod: GET 00:00:20.497 URL: http://10.211.164.101/packages/spdk_dec6d38430cf0927c9d59eb0ba816b99c261d5fc.tar.gz 00:00:20.498 Sending request to url: http://10.211.164.101/packages/spdk_dec6d38430cf0927c9d59eb0ba816b99c261d5fc.tar.gz 00:00:20.515 Response Code: HTTP/1.1 200 OK 00:00:20.516 Success: Status code 200 is in the accepted range: 200,404 00:00:20.516 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_dec6d38430cf0927c9d59eb0ba816b99c261d5fc.tar.gz 00:01:40.193 [Pipeline] } 00:01:40.211 [Pipeline] // retry 00:01:40.219 [Pipeline] sh 00:01:40.648 + tar --no-same-owner -xf spdk_dec6d38430cf0927c9d59eb0ba816b99c261d5fc.tar.gz 00:01:43.967 [Pipeline] sh 00:01:44.257 + git -C spdk log --oneline -n5 00:01:44.257 dec6d3843 bdev: Insert or overwrite metadata using bounce/accel buffer if NVMe PRACT is set 00:01:44.257 4b2d483c6 dif: Add spdk_dif_pi_format_get_pi_size() to use for NVMe PRACT 00:01:44.257 560a1dde3 bdev/malloc: Support accel sequence when DIF is enabled 00:01:44.257 30279d1cf bdev: Add spdk_bdev_io_has_no_metadata() for bdev modules 00:01:44.257 4bd31eb0a bdev/malloc: Extract internal of verify_pi() for code reuse 00:01:44.270 [Pipeline] } 00:01:44.284 [Pipeline] // stage 00:01:44.292 [Pipeline] stage 00:01:44.294 [Pipeline] { (Prepare) 00:01:44.311 [Pipeline] writeFile 00:01:44.327 [Pipeline] sh 00:01:44.617 + logger -p user.info -t JENKINS-CI 00:01:44.632 [Pipeline] sh 00:01:44.921 + logger -p user.info -t JENKINS-CI 00:01:44.934 [Pipeline] sh 00:01:45.222 + cat autorun-spdk.conf 00:01:45.222 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:45.222 SPDK_TEST_NVMF=1 00:01:45.222 SPDK_TEST_NVME_CLI=1 00:01:45.222 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:45.222 SPDK_TEST_NVMF_NICS=e810 00:01:45.222 SPDK_TEST_VFIOUSER=1 00:01:45.222 SPDK_RUN_UBSAN=1 00:01:45.222 NET_TYPE=phy 00:01:45.232 RUN_NIGHTLY=0 00:01:45.237 [Pipeline] readFile 00:01:45.261 [Pipeline] withEnv 00:01:45.263 [Pipeline] { 00:01:45.275 [Pipeline] sh 00:01:45.563 + set -ex 00:01:45.564 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:45.564 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:45.564 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:45.564 ++ SPDK_TEST_NVMF=1 00:01:45.564 ++ SPDK_TEST_NVME_CLI=1 
00:01:45.564 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:45.564 ++ SPDK_TEST_NVMF_NICS=e810
00:01:45.564 ++ SPDK_TEST_VFIOUSER=1
00:01:45.564 ++ SPDK_RUN_UBSAN=1
00:01:45.564 ++ NET_TYPE=phy
00:01:45.564 ++ RUN_NIGHTLY=0
00:01:45.564 + case $SPDK_TEST_NVMF_NICS in
00:01:45.564 + DRIVERS=ice
00:01:45.564 + [[ tcp == \r\d\m\a ]]
00:01:45.564 + [[ -n ice ]]
00:01:45.564 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:45.564 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:45.564 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:45.564 rmmod: ERROR: Module irdma is not currently loaded
00:01:45.564 rmmod: ERROR: Module i40iw is not currently loaded
00:01:45.564 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:45.564 + true
00:01:45.564 + for D in $DRIVERS
00:01:45.564 + sudo modprobe ice
00:01:45.564 + exit 0
00:01:45.572 [Pipeline] }
00:01:45.586 [Pipeline] // withEnv
00:01:45.591 [Pipeline] }
00:01:45.603 [Pipeline] // stage
00:01:45.611 [Pipeline] catchError
00:01:45.612 [Pipeline] {
00:01:45.621 [Pipeline] timeout
00:01:45.621 Timeout set to expire in 1 hr 0 min
00:01:45.622 [Pipeline] {
00:01:45.632 [Pipeline] stage
00:01:45.633 [Pipeline] { (Tests)
00:01:45.645 [Pipeline] sh
00:01:45.933 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:45.933 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:45.933 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:45.933 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:45.933 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:45.933 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:45.933 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:45.933 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:45.933 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:45.933 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:45.933 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:45.933 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:45.933 + source /etc/os-release
00:01:45.933 ++ NAME='Fedora Linux'
00:01:45.933 ++ VERSION='39 (Cloud Edition)'
00:01:45.933 ++ ID=fedora
00:01:45.933 ++ VERSION_ID=39
00:01:45.933 ++ VERSION_CODENAME=
00:01:45.933 ++ PLATFORM_ID=platform:f39
00:01:45.933 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:45.933 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:45.933 ++ LOGO=fedora-logo-icon
00:01:45.933 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:45.933 ++ HOME_URL=https://fedoraproject.org/
00:01:45.933 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:45.933 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:45.933 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:45.933 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:45.933 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:45.933 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:45.933 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:45.933 ++ SUPPORT_END=2024-11-12
00:01:45.933 ++ VARIANT='Cloud Edition'
00:01:45.933 ++ VARIANT_ID=cloud
00:01:45.933 + uname -a
00:01:45.933 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:45.933 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:49.236 Hugepages
00:01:49.236 node   hugesize   free /  total
00:01:49.236 node0  1048576kB     0 /      0
00:01:49.236 node0     2048kB     0 /      0
00:01:49.236 node1  1048576kB     0 /      0
00:01:49.236 node1     2048kB     0 /      0
00:01:49.236
00:01:49.236 Type   BDF           Vendor Device NUMA   Driver    Device   Block devices
00:01:49.236 I/OAT  0000:00:01.0  8086   0b00   0      ioatdma   -        -
00:01:49.236 I/OAT  0000:00:01.1  8086   0b00   0      ioatdma   -        -
00:01:49.236 I/OAT  0000:00:01.2  8086   0b00   0      ioatdma   -        -
00:01:49.236 I/OAT  0000:00:01.3  8086   0b00   0      ioatdma   -        -
00:01:49.236 I/OAT  0000:00:01.4  8086   0b00   0      ioatdma   -        -
00:01:49.236 I/OAT  0000:00:01.5  8086   0b00   0      ioatdma   -        -
00:01:49.236 I/OAT  0000:00:01.6  8086   0b00   0      ioatdma   -        -
00:01:49.236 I/OAT  0000:00:01.7  8086   0b00   0      ioatdma   -        -
00:01:49.236 NVMe   0000:65:00.0  144d   a80a   0      nvme      nvme0    nvme0n1
00:01:49.236 I/OAT  0000:80:01.0  8086   0b00   1      ioatdma   -        -
00:01:49.236 I/OAT  0000:80:01.1  8086   0b00   1      ioatdma   -        -
00:01:49.236 I/OAT  0000:80:01.2  8086   0b00   1      ioatdma   -        -
00:01:49.236 I/OAT  0000:80:01.3  8086   0b00   1      ioatdma   -        -
00:01:49.236 I/OAT  0000:80:01.4  8086   0b00   1      ioatdma   -        -
00:01:49.236 I/OAT  0000:80:01.5  8086   0b00   1      ioatdma   -        -
00:01:49.236 I/OAT  0000:80:01.6  8086   0b00   1      ioatdma   -        -
00:01:49.236 I/OAT  0000:80:01.7  8086   0b00   1      ioatdma   -        -
00:01:49.236 + rm -f /tmp/spdk-ld-path
00:01:49.236 + source autorun-spdk.conf
00:01:49.236 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:49.236 ++ SPDK_TEST_NVMF=1
00:01:49.236 ++ SPDK_TEST_NVME_CLI=1
00:01:49.236 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:49.236 ++ SPDK_TEST_NVMF_NICS=e810
00:01:49.236 ++ SPDK_TEST_VFIOUSER=1
00:01:49.236 ++ SPDK_RUN_UBSAN=1
00:01:49.236 ++ NET_TYPE=phy
00:01:49.236 ++ RUN_NIGHTLY=0
00:01:49.236 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:49.236 + [[ -n '' ]]
00:01:49.236 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:49.236 + for M in /var/spdk/build-*-manifest.txt
00:01:49.236 + [[ -f
/var/spdk/build-kernel-manifest.txt ]] 00:01:49.236 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:49.236 + for M in /var/spdk/build-*-manifest.txt 00:01:49.236 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:49.236 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:49.236 + for M in /var/spdk/build-*-manifest.txt 00:01:49.236 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:49.236 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:49.236 ++ uname 00:01:49.236 + [[ Linux == \L\i\n\u\x ]] 00:01:49.236 + sudo dmesg -T 00:01:49.236 + sudo dmesg --clear 00:01:49.237 + dmesg_pid=743835 00:01:49.237 + [[ Fedora Linux == FreeBSD ]] 00:01:49.237 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:49.237 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:49.237 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:49.237 + [[ -x /usr/src/fio-static/fio ]] 00:01:49.237 + export FIO_BIN=/usr/src/fio-static/fio 00:01:49.237 + FIO_BIN=/usr/src/fio-static/fio 00:01:49.237 + sudo dmesg -Tw 00:01:49.237 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:49.237 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:49.237 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:49.237 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:49.237 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:49.237 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:49.237 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:49.237 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:49.237 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:49.237 11:25:14 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:49.237 11:25:14 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:49.237 11:25:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:49.237 11:25:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:49.237 11:25:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:49.237 11:25:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:49.237 11:25:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:49.237 11:25:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:01:49.237 11:25:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:49.237 11:25:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:49.237 11:25:14 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:49.237 11:25:14 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:49.237 11:25:14 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:49.499 11:25:14 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:49.499 11:25:14 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:49.499 11:25:14 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:49.499 11:25:14 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:49.499 11:25:14 -- scripts/common.sh@552 -- $ 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:49.499 11:25:14 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:49.499 11:25:14 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:49.499 11:25:14 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:49.499 11:25:14 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:49.499 11:25:14 -- paths/export.sh@5 -- $ export PATH 00:01:49.499 11:25:14 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:49.499 11:25:14 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:49.499 11:25:14 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:49.499 11:25:14 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731666314.XXXXXX 00:01:49.499 11:25:14 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731666314.0nR7q4 00:01:49.499 11:25:14 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:49.499 11:25:14 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:49.499 11:25:14 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:49.499 11:25:14 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:49.499 11:25:14 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:49.499 11:25:14 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:49.499 11:25:14 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:49.499 11:25:14 -- common/autotest_common.sh@10 -- $ set +x 
00:01:49.500 11:25:14 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:49.500 11:25:14 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:49.500 11:25:14 -- pm/common@17 -- $ local monitor 00:01:49.500 11:25:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.500 11:25:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.500 11:25:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.500 11:25:14 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:49.500 11:25:14 -- pm/common@21 -- $ date +%s 00:01:49.500 11:25:14 -- pm/common@21 -- $ date +%s 00:01:49.500 11:25:14 -- pm/common@25 -- $ sleep 1 00:01:49.500 11:25:14 -- pm/common@21 -- $ date +%s 00:01:49.500 11:25:14 -- pm/common@21 -- $ date +%s 00:01:49.500 11:25:14 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731666314 00:01:49.500 11:25:14 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731666314 00:01:49.500 11:25:14 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731666314 00:01:49.500 11:25:14 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1731666314 00:01:49.500 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731666314_collect-cpu-load.pm.log 00:01:49.500 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731666314_collect-vmstat.pm.log 00:01:49.500 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731666314_collect-cpu-temp.pm.log 00:01:49.500 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1731666314_collect-bmc-pm.bmc.pm.log 00:01:50.443 11:25:15 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:50.443 11:25:15 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:50.443 11:25:15 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:50.443 11:25:15 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:50.443 11:25:15 -- spdk/autobuild.sh@16 -- $ date -u 00:01:50.443 Fri Nov 15 10:25:15 AM UTC 2024 00:01:50.443 11:25:15 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:50.443 v25.01-pre-211-gdec6d3843 00:01:50.443 11:25:15 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:50.443 11:25:15 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:50.443 11:25:15 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:50.443 11:25:15 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:50.443 11:25:15 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:50.443 11:25:15 -- common/autotest_common.sh@10 -- $ set +x 00:01:50.443 
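The run_test call just above and the banner-framed block that follows come from SPDK's shared test harness: run_test wraps an arbitrary command, prints START/END banners around it, and reports its timing, which is what produces the real/user/sys lines visible below. A minimal sketch of that wrapper pattern, assuming a simplified signature; the real helper in test/common/autotest_common.sh additionally manages xtrace state and other bookkeeping:

    # Simplified sketch of the run_test banner/timing pattern (not the exact SPDK source)
    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"        # run the wrapped command, e.g. echo 'using ubsan' or make -j144
        local rc=$?      # capture the wrapped command's exit status
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }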
************************************
00:01:50.443 START TEST ubsan
00:01:50.443 ************************************
00:01:50.443 11:25:15 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan'
00:01:50.443 using ubsan
00:01:50.443
00:01:50.443 real 0m0.001s
00:01:50.443 user 0m0.001s
00:01:50.443 sys 0m0.000s
00:01:50.443 11:25:15 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:01:50.443 11:25:15 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:50.443 ************************************
00:01:50.443 END TEST ubsan
00:01:50.443 ************************************
00:01:50.704 11:25:15 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:50.704 11:25:15 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:50.704 11:25:15 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:50.704 11:25:15 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:50.704 11:25:15 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:50.704 11:25:15 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:50.704 11:25:15 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:50.704 11:25:15 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:50.704 11:25:15 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:50.704 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:50.704 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:51.276 Using 'verbs' RDMA provider
00:02:07.138 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:19.379 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:19.951 Creating mk/config.mk...done.
00:02:19.951 Creating mk/cc.flags.mk...done.
00:02:19.951 Type 'make' to build.
00:02:19.951 11:25:45 -- spdk/autobuild.sh@70 -- $ run_test make make -j144
00:02:19.951 11:25:45 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:02:19.951 11:25:45 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:02:19.951 11:25:45 -- common/autotest_common.sh@10 -- $ set +x
00:02:19.951 ************************************
00:02:19.951 START TEST make
00:02:19.951 ************************************
00:02:19.951 11:25:45 make -- common/autotest_common.sh@1127 -- $ make -j144
00:02:20.523 make[1]: Nothing to be done for 'all'.
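The configure invocation above records every feature flag this job builds with; --enable-ubsan, for example, follows from SPDK_RUN_UBSAN=1 in autorun-spdk.conf. A minimal sketch of reproducing the same tree outside Jenkins, assuming a fresh clone; the clone URL and submodule step are standard SPDK setup rather than something taken from this log:

    # Hypothetical local reproduction of the configure/make steps logged above
    git clone https://github.com/spdk/spdk.git && cd spdk
    git submodule update --init    # pulls in dpdk/, libvfio-user/, isa-l/, ...
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
    make -j"$(nproc)"              # the CI host ran make -j144

Note that --with-fio expects a built fio source tree at the given path, so fio must already be present there when configure runs.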
00:02:21.907 The Meson build system
00:02:21.907 Version: 1.5.0
00:02:21.907 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:02:21.908 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:21.908 Build type: native build
00:02:21.908 Project name: libvfio-user
00:02:21.908 Project version: 0.0.1
00:02:21.908 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:21.908 C linker for the host machine: cc ld.bfd 2.40-14
00:02:21.908 Host machine cpu family: x86_64
00:02:21.908 Host machine cpu: x86_64
00:02:21.908 Run-time dependency threads found: YES
00:02:21.908 Library dl found: YES
00:02:21.908 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:21.908 Run-time dependency json-c found: YES 0.17
00:02:21.908 Run-time dependency cmocka found: YES 1.1.7
00:02:21.908 Program pytest-3 found: NO
00:02:21.908 Program flake8 found: NO
00:02:21.908 Program misspell-fixer found: NO
00:02:21.908 Program restructuredtext-lint found: NO
00:02:21.908 Program valgrind found: YES (/usr/bin/valgrind)
00:02:21.908 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:21.908 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:21.908 Compiler for C supports arguments -Wwrite-strings: YES
00:02:21.908 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:21.908 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:21.908 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:21.908 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:21.908 Build targets in project: 8
00:02:21.908 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:21.908 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:21.908
00:02:21.908 libvfio-user 0.0.1
00:02:21.908
00:02:21.908 User defined options
00:02:21.908 buildtype : debug
00:02:21.908 default_library: shared
00:02:21.908 libdir : /usr/local/lib
00:02:21.908
00:02:21.908 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:22.169 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:22.430 [1/37] Compiling C object samples/lspci.p/lspci.c.o
00:02:22.430 [2/37] Compiling C object samples/null.p/null.c.o
00:02:22.430 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:22.430 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:02:22.430 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:22.430 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:02:22.430 [7/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:22.430 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:02:22.430 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:22.430 [10/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:22.430 [11/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:22.430 [12/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:22.430 [13/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:22.430 [14/37] Compiling C object test/unit_tests.p/mocks.c.o
00:02:22.430 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:02:22.430 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:22.430 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:02:22.430 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:22.430 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:02:22.430 [20/37] Compiling C object samples/server.p/server.c.o
00:02:22.430 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:02:22.430 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:22.430 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:22.430 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:22.430 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:22.430 [26/37] Compiling C object samples/client.p/client.c.o
00:02:22.430 [27/37] Linking target samples/client
00:02:22.430 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:02:22.690 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:22.690 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:02:22.690 [31/37] Linking target test/unit_tests
00:02:22.690 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:02:22.690 [33/37] Linking target samples/server
00:02:22.690 [34/37] Linking target samples/lspci
00:02:22.690 [35/37] Linking target samples/gpio-pci-idio-16
00:02:22.690 [36/37] Linking target samples/null
00:02:22.690 [37/37] Linking target samples/shadow_ioeventfd_server
00:02:22.690 INFO: autodetecting backend as ninja
00:02:22.690 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
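libvfio-user is configured and built out of tree here: the summary above shows buildtype debug, default_library shared, and a libdir of /usr/local/lib, ninja compiles the 37 targets inside build-debug, and the install that follows on the next line is staged through DESTDIR instead of being written to the live filesystem. A minimal sketch of that generic Meson flow, assuming placeholder paths; this is the standard pattern, not SPDK's exact wrapper script:

    # Hypothetical sketch of the out-of-tree Meson build/install pattern used here
    meson setup build-debug --buildtype=debug -Ddefault_library=shared
    ninja -C build-debug                          # the [1/37]..[37/37] steps above
    DESTDIR=/tmp/stage meson install --quiet -C build-debug

Staging through DESTDIR keeps the build host clean while still laying files out under the configured libdir inside the staging root.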
00:02:22.950 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:23.210 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:23.210 ninja: no work to do. 00:02:29.799 The Meson build system 00:02:29.799 Version: 1.5.0 00:02:29.799 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:29.799 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:29.799 Build type: native build 00:02:29.799 Program cat found: YES (/usr/bin/cat) 00:02:29.799 Project name: DPDK 00:02:29.799 Project version: 24.03.0 00:02:29.799 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:29.799 C linker for the host machine: cc ld.bfd 2.40-14 00:02:29.799 Host machine cpu family: x86_64 00:02:29.799 Host machine cpu: x86_64 00:02:29.799 Message: ## Building in Developer Mode ## 00:02:29.799 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:29.799 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:29.799 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:29.799 Program python3 found: YES (/usr/bin/python3) 00:02:29.799 Program cat found: YES (/usr/bin/cat) 00:02:29.799 Compiler for C supports arguments -march=native: YES 00:02:29.799 Checking for size of "void *" : 8 00:02:29.799 Checking for size of "void *" : 8 (cached) 00:02:29.799 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:29.799 Library m found: YES 00:02:29.799 Library numa found: YES 00:02:29.799 Has header "numaif.h" : YES 00:02:29.799 Library fdt found: NO 00:02:29.799 Library execinfo found: NO 00:02:29.799 Has header "execinfo.h" : YES 00:02:29.799 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:29.799 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:29.799 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:29.799 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:29.799 Run-time dependency openssl found: YES 3.1.1 00:02:29.799 Run-time dependency libpcap found: YES 1.10.4 00:02:29.799 Has header "pcap.h" with dependency libpcap: YES 00:02:29.799 Compiler for C supports arguments -Wcast-qual: YES 00:02:29.799 Compiler for C supports arguments -Wdeprecated: YES 00:02:29.799 Compiler for C supports arguments -Wformat: YES 00:02:29.799 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:29.799 Compiler for C supports arguments -Wformat-security: NO 00:02:29.799 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:29.799 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:29.799 Compiler for C supports arguments -Wnested-externs: YES 00:02:29.799 Compiler for C supports arguments -Wold-style-definition: YES 00:02:29.799 Compiler for C supports arguments -Wpointer-arith: YES 00:02:29.799 Compiler for C supports arguments -Wsign-compare: YES 00:02:29.799 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:29.799 Compiler for C supports arguments -Wundef: YES 00:02:29.799 Compiler for C supports arguments -Wwrite-strings: YES 00:02:29.799 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:29.799 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:02:29.799 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:29.799 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:29.799 Program objdump found: YES (/usr/bin/objdump) 00:02:29.799 Compiler for C supports arguments -mavx512f: YES 00:02:29.799 Checking if "AVX512 checking" compiles: YES 00:02:29.799 Fetching value of define "__SSE4_2__" : 1 00:02:29.799 Fetching value of define "__AES__" : 1 00:02:29.799 Fetching value of define "__AVX__" : 1 00:02:29.799 Fetching value of define "__AVX2__" : 1 00:02:29.799 Fetching value of define "__AVX512BW__" : 1 00:02:29.799 Fetching value of define "__AVX512CD__" : 1 00:02:29.799 Fetching value of define "__AVX512DQ__" : 1 00:02:29.799 Fetching value of define "__AVX512F__" : 1 00:02:29.799 Fetching value of define "__AVX512VL__" : 1 00:02:29.799 Fetching value of define "__PCLMUL__" : 1 00:02:29.799 Fetching value of define "__RDRND__" : 1 00:02:29.799 Fetching value of define "__RDSEED__" : 1 00:02:29.799 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:29.799 Fetching value of define "__znver1__" : (undefined) 00:02:29.799 Fetching value of define "__znver2__" : (undefined) 00:02:29.799 Fetching value of define "__znver3__" : (undefined) 00:02:29.799 Fetching value of define "__znver4__" : (undefined) 00:02:29.799 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:29.799 Message: lib/log: Defining dependency "log" 00:02:29.799 Message: lib/kvargs: Defining dependency "kvargs" 00:02:29.799 Message: lib/telemetry: Defining dependency "telemetry" 00:02:29.799 Checking for function "getentropy" : NO 00:02:29.799 Message: lib/eal: Defining dependency "eal" 00:02:29.799 Message: lib/ring: Defining dependency "ring" 00:02:29.799 Message: lib/rcu: Defining dependency "rcu" 00:02:29.799 Message: lib/mempool: Defining dependency "mempool" 00:02:29.799 Message: lib/mbuf: Defining dependency "mbuf" 00:02:29.799 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:29.799 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:29.799 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:29.799 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:29.799 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:29.799 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:29.799 Compiler for C supports arguments -mpclmul: YES 00:02:29.799 Compiler for C supports arguments -maes: YES 00:02:29.799 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:29.799 Compiler for C supports arguments -mavx512bw: YES 00:02:29.799 Compiler for C supports arguments -mavx512dq: YES 00:02:29.799 Compiler for C supports arguments -mavx512vl: YES 00:02:29.799 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:29.799 Compiler for C supports arguments -mavx2: YES 00:02:29.799 Compiler for C supports arguments -mavx: YES 00:02:29.799 Message: lib/net: Defining dependency "net" 00:02:29.799 Message: lib/meter: Defining dependency "meter" 00:02:29.799 Message: lib/ethdev: Defining dependency "ethdev" 00:02:29.799 Message: lib/pci: Defining dependency "pci" 00:02:29.799 Message: lib/cmdline: Defining dependency "cmdline" 00:02:29.799 Message: lib/hash: Defining dependency "hash" 00:02:29.799 Message: lib/timer: Defining dependency "timer" 00:02:29.799 Message: lib/compressdev: Defining dependency "compressdev" 00:02:29.799 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:29.799 Message: lib/dmadev: Defining dependency "dmadev" 
00:02:29.799 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:29.799 Message: lib/power: Defining dependency "power" 00:02:29.799 Message: lib/reorder: Defining dependency "reorder" 00:02:29.799 Message: lib/security: Defining dependency "security" 00:02:29.799 Has header "linux/userfaultfd.h" : YES 00:02:29.799 Has header "linux/vduse.h" : YES 00:02:29.799 Message: lib/vhost: Defining dependency "vhost" 00:02:29.799 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:29.799 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:29.799 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:29.799 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:29.799 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:29.799 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:29.799 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:29.800 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:29.800 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:29.800 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:29.800 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:29.800 Configuring doxy-api-html.conf using configuration 00:02:29.800 Configuring doxy-api-man.conf using configuration 00:02:29.800 Program mandb found: YES (/usr/bin/mandb) 00:02:29.800 Program sphinx-build found: NO 00:02:29.800 Configuring rte_build_config.h using configuration 00:02:29.800 Message: 00:02:29.800 ================= 00:02:29.800 Applications Enabled 00:02:29.800 ================= 00:02:29.800 00:02:29.800 apps: 00:02:29.800 00:02:29.800 00:02:29.800 Message: 00:02:29.800 ================= 00:02:29.800 Libraries Enabled 00:02:29.800 ================= 00:02:29.800 00:02:29.800 libs: 00:02:29.800 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:29.800 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:29.800 cryptodev, dmadev, power, reorder, security, vhost, 00:02:29.800 00:02:29.800 Message: 00:02:29.800 =============== 00:02:29.800 Drivers Enabled 00:02:29.800 =============== 00:02:29.800 00:02:29.800 common: 00:02:29.800 00:02:29.800 bus: 00:02:29.800 pci, vdev, 00:02:29.800 mempool: 00:02:29.800 ring, 00:02:29.800 dma: 00:02:29.800 00:02:29.800 net: 00:02:29.800 00:02:29.800 crypto: 00:02:29.800 00:02:29.800 compress: 00:02:29.800 00:02:29.800 vdpa: 00:02:29.800 00:02:29.800 00:02:29.800 Message: 00:02:29.800 ================= 00:02:29.800 Content Skipped 00:02:29.800 ================= 00:02:29.800 00:02:29.800 apps: 00:02:29.800 dumpcap: explicitly disabled via build config 00:02:29.800 graph: explicitly disabled via build config 00:02:29.800 pdump: explicitly disabled via build config 00:02:29.800 proc-info: explicitly disabled via build config 00:02:29.800 test-acl: explicitly disabled via build config 00:02:29.800 test-bbdev: explicitly disabled via build config 00:02:29.800 test-cmdline: explicitly disabled via build config 00:02:29.800 test-compress-perf: explicitly disabled via build config 00:02:29.800 test-crypto-perf: explicitly disabled via build config 00:02:29.800 test-dma-perf: explicitly disabled via build config 00:02:29.800 test-eventdev: explicitly disabled via build config 00:02:29.800 test-fib: explicitly disabled via build config 00:02:29.800 test-flow-perf: explicitly disabled via build config 00:02:29.800 test-gpudev: explicitly disabled 
via build config 00:02:29.800 test-mldev: explicitly disabled via build config 00:02:29.800 test-pipeline: explicitly disabled via build config 00:02:29.800 test-pmd: explicitly disabled via build config 00:02:29.800 test-regex: explicitly disabled via build config 00:02:29.800 test-sad: explicitly disabled via build config 00:02:29.800 test-security-perf: explicitly disabled via build config 00:02:29.800 00:02:29.800 libs: 00:02:29.800 argparse: explicitly disabled via build config 00:02:29.800 metrics: explicitly disabled via build config 00:02:29.800 acl: explicitly disabled via build config 00:02:29.800 bbdev: explicitly disabled via build config 00:02:29.800 bitratestats: explicitly disabled via build config 00:02:29.800 bpf: explicitly disabled via build config 00:02:29.800 cfgfile: explicitly disabled via build config 00:02:29.800 distributor: explicitly disabled via build config 00:02:29.800 efd: explicitly disabled via build config 00:02:29.800 eventdev: explicitly disabled via build config 00:02:29.800 dispatcher: explicitly disabled via build config 00:02:29.800 gpudev: explicitly disabled via build config 00:02:29.800 gro: explicitly disabled via build config 00:02:29.800 gso: explicitly disabled via build config 00:02:29.800 ip_frag: explicitly disabled via build config 00:02:29.800 jobstats: explicitly disabled via build config 00:02:29.800 latencystats: explicitly disabled via build config 00:02:29.800 lpm: explicitly disabled via build config 00:02:29.800 member: explicitly disabled via build config 00:02:29.800 pcapng: explicitly disabled via build config 00:02:29.800 rawdev: explicitly disabled via build config 00:02:29.800 regexdev: explicitly disabled via build config 00:02:29.800 mldev: explicitly disabled via build config 00:02:29.800 rib: explicitly disabled via build config 00:02:29.800 sched: explicitly disabled via build config 00:02:29.800 stack: explicitly disabled via build config 00:02:29.800 ipsec: explicitly disabled via build config 00:02:29.800 pdcp: explicitly disabled via build config 00:02:29.800 fib: explicitly disabled via build config 00:02:29.800 port: explicitly disabled via build config 00:02:29.800 pdump: explicitly disabled via build config 00:02:29.800 table: explicitly disabled via build config 00:02:29.800 pipeline: explicitly disabled via build config 00:02:29.800 graph: explicitly disabled via build config 00:02:29.800 node: explicitly disabled via build config 00:02:29.800 00:02:29.800 drivers: 00:02:29.800 common/cpt: not in enabled drivers build config 00:02:29.800 common/dpaax: not in enabled drivers build config 00:02:29.800 common/iavf: not in enabled drivers build config 00:02:29.800 common/idpf: not in enabled drivers build config 00:02:29.800 common/ionic: not in enabled drivers build config 00:02:29.800 common/mvep: not in enabled drivers build config 00:02:29.800 common/octeontx: not in enabled drivers build config 00:02:29.800 bus/auxiliary: not in enabled drivers build config 00:02:29.800 bus/cdx: not in enabled drivers build config 00:02:29.800 bus/dpaa: not in enabled drivers build config 00:02:29.800 bus/fslmc: not in enabled drivers build config 00:02:29.800 bus/ifpga: not in enabled drivers build config 00:02:29.800 bus/platform: not in enabled drivers build config 00:02:29.800 bus/uacce: not in enabled drivers build config 00:02:29.800 bus/vmbus: not in enabled drivers build config 00:02:29.800 common/cnxk: not in enabled drivers build config 00:02:29.800 common/mlx5: not in enabled drivers build config 00:02:29.800 
common/nfp: not in enabled drivers build config 00:02:29.800 common/nitrox: not in enabled drivers build config 00:02:29.800 common/qat: not in enabled drivers build config 00:02:29.800 common/sfc_efx: not in enabled drivers build config 00:02:29.800 mempool/bucket: not in enabled drivers build config 00:02:29.800 mempool/cnxk: not in enabled drivers build config 00:02:29.800 mempool/dpaa: not in enabled drivers build config 00:02:29.800 mempool/dpaa2: not in enabled drivers build config 00:02:29.800 mempool/octeontx: not in enabled drivers build config 00:02:29.800 mempool/stack: not in enabled drivers build config 00:02:29.800 dma/cnxk: not in enabled drivers build config 00:02:29.800 dma/dpaa: not in enabled drivers build config 00:02:29.800 dma/dpaa2: not in enabled drivers build config 00:02:29.800 dma/hisilicon: not in enabled drivers build config 00:02:29.800 dma/idxd: not in enabled drivers build config 00:02:29.800 dma/ioat: not in enabled drivers build config 00:02:29.800 dma/skeleton: not in enabled drivers build config 00:02:29.800 net/af_packet: not in enabled drivers build config 00:02:29.800 net/af_xdp: not in enabled drivers build config 00:02:29.800 net/ark: not in enabled drivers build config 00:02:29.800 net/atlantic: not in enabled drivers build config 00:02:29.800 net/avp: not in enabled drivers build config 00:02:29.800 net/axgbe: not in enabled drivers build config 00:02:29.800 net/bnx2x: not in enabled drivers build config 00:02:29.800 net/bnxt: not in enabled drivers build config 00:02:29.800 net/bonding: not in enabled drivers build config 00:02:29.800 net/cnxk: not in enabled drivers build config 00:02:29.800 net/cpfl: not in enabled drivers build config 00:02:29.800 net/cxgbe: not in enabled drivers build config 00:02:29.800 net/dpaa: not in enabled drivers build config 00:02:29.800 net/dpaa2: not in enabled drivers build config 00:02:29.800 net/e1000: not in enabled drivers build config 00:02:29.800 net/ena: not in enabled drivers build config 00:02:29.800 net/enetc: not in enabled drivers build config 00:02:29.800 net/enetfec: not in enabled drivers build config 00:02:29.800 net/enic: not in enabled drivers build config 00:02:29.800 net/failsafe: not in enabled drivers build config 00:02:29.800 net/fm10k: not in enabled drivers build config 00:02:29.800 net/gve: not in enabled drivers build config 00:02:29.800 net/hinic: not in enabled drivers build config 00:02:29.800 net/hns3: not in enabled drivers build config 00:02:29.800 net/i40e: not in enabled drivers build config 00:02:29.800 net/iavf: not in enabled drivers build config 00:02:29.800 net/ice: not in enabled drivers build config 00:02:29.800 net/idpf: not in enabled drivers build config 00:02:29.800 net/igc: not in enabled drivers build config 00:02:29.800 net/ionic: not in enabled drivers build config 00:02:29.800 net/ipn3ke: not in enabled drivers build config 00:02:29.800 net/ixgbe: not in enabled drivers build config 00:02:29.800 net/mana: not in enabled drivers build config 00:02:29.800 net/memif: not in enabled drivers build config 00:02:29.800 net/mlx4: not in enabled drivers build config 00:02:29.800 net/mlx5: not in enabled drivers build config 00:02:29.800 net/mvneta: not in enabled drivers build config 00:02:29.800 net/mvpp2: not in enabled drivers build config 00:02:29.800 net/netvsc: not in enabled drivers build config 00:02:29.800 net/nfb: not in enabled drivers build config 00:02:29.800 net/nfp: not in enabled drivers build config 00:02:29.800 net/ngbe: not in enabled drivers build 
config 00:02:29.800 net/null: not in enabled drivers build config 00:02:29.800 net/octeontx: not in enabled drivers build config 00:02:29.800 net/octeon_ep: not in enabled drivers build config 00:02:29.800 net/pcap: not in enabled drivers build config 00:02:29.800 net/pfe: not in enabled drivers build config 00:02:29.800 net/qede: not in enabled drivers build config 00:02:29.800 net/ring: not in enabled drivers build config 00:02:29.800 net/sfc: not in enabled drivers build config 00:02:29.800 net/softnic: not in enabled drivers build config 00:02:29.800 net/tap: not in enabled drivers build config 00:02:29.800 net/thunderx: not in enabled drivers build config 00:02:29.800 net/txgbe: not in enabled drivers build config 00:02:29.800 net/vdev_netvsc: not in enabled drivers build config 00:02:29.800 net/vhost: not in enabled drivers build config 00:02:29.800 net/virtio: not in enabled drivers build config 00:02:29.801 net/vmxnet3: not in enabled drivers build config 00:02:29.801 raw/*: missing internal dependency, "rawdev" 00:02:29.801 crypto/armv8: not in enabled drivers build config 00:02:29.801 crypto/bcmfs: not in enabled drivers build config 00:02:29.801 crypto/caam_jr: not in enabled drivers build config 00:02:29.801 crypto/ccp: not in enabled drivers build config 00:02:29.801 crypto/cnxk: not in enabled drivers build config 00:02:29.801 crypto/dpaa_sec: not in enabled drivers build config 00:02:29.801 crypto/dpaa2_sec: not in enabled drivers build config 00:02:29.801 crypto/ipsec_mb: not in enabled drivers build config 00:02:29.801 crypto/mlx5: not in enabled drivers build config 00:02:29.801 crypto/mvsam: not in enabled drivers build config 00:02:29.801 crypto/nitrox: not in enabled drivers build config 00:02:29.801 crypto/null: not in enabled drivers build config 00:02:29.801 crypto/octeontx: not in enabled drivers build config 00:02:29.801 crypto/openssl: not in enabled drivers build config 00:02:29.801 crypto/scheduler: not in enabled drivers build config 00:02:29.801 crypto/uadk: not in enabled drivers build config 00:02:29.801 crypto/virtio: not in enabled drivers build config 00:02:29.801 compress/isal: not in enabled drivers build config 00:02:29.801 compress/mlx5: not in enabled drivers build config 00:02:29.801 compress/nitrox: not in enabled drivers build config 00:02:29.801 compress/octeontx: not in enabled drivers build config 00:02:29.801 compress/zlib: not in enabled drivers build config 00:02:29.801 regex/*: missing internal dependency, "regexdev" 00:02:29.801 ml/*: missing internal dependency, "mldev" 00:02:29.801 vdpa/ifc: not in enabled drivers build config 00:02:29.801 vdpa/mlx5: not in enabled drivers build config 00:02:29.801 vdpa/nfp: not in enabled drivers build config 00:02:29.801 vdpa/sfc: not in enabled drivers build config 00:02:29.801 event/*: missing internal dependency, "eventdev" 00:02:29.801 baseband/*: missing internal dependency, "bbdev" 00:02:29.801 gpu/*: missing internal dependency, "gpudev" 00:02:29.801 00:02:29.801 00:02:29.801 Build targets in project: 84 00:02:29.801 00:02:29.801 DPDK 24.03.0 00:02:29.801 00:02:29.801 User defined options 00:02:29.801 buildtype : debug 00:02:29.801 default_library : shared 00:02:29.801 libdir : lib 00:02:29.801 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:29.801 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:29.801 c_link_args : 00:02:29.801 cpu_instruction_set: native 00:02:29.801 disable_apps : 
test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:02:29.801 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:02:29.801 enable_docs : false 00:02:29.801 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:29.801 enable_kmods : false 00:02:29.801 max_lcores : 128 00:02:29.801 tests : false 00:02:29.801 00:02:29.801 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:29.801 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:29.801 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:29.801 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:29.801 [3/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:29.801 [4/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:29.801 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:29.801 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:29.801 [7/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:29.801 [8/267] Linking static target lib/librte_kvargs.a 00:02:29.801 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:29.801 [10/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:29.801 [11/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:29.801 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:29.801 [13/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:29.801 [14/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:29.801 [15/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:29.801 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:29.801 [17/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:29.801 [18/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:29.801 [19/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:29.801 [20/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:29.801 [21/267] Linking static target lib/librte_log.a 00:02:29.801 [22/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:29.801 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:29.801 [24/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:29.801 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:29.801 [26/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:30.060 [27/267] Linking static target lib/librte_pci.a 00:02:30.060 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:30.060 [29/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:30.060 [30/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:30.060 [31/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 
00:02:30.060 [32/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:30.060 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:30.060 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:30.060 [35/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:30.060 [36/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:30.060 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:30.060 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:30.319 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:30.319 [40/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:30.319 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:30.319 [42/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.319 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:30.319 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:30.319 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:30.319 [46/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:30.319 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:30.319 [48/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.319 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:30.319 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:30.319 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:30.319 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:30.319 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:30.319 [54/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:30.319 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:30.319 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:30.319 [57/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:30.319 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:30.319 [59/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:30.319 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:30.319 [61/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:30.319 [62/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:30.319 [63/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:30.319 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:30.319 [65/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:30.319 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:30.319 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:30.319 [68/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:30.319 [69/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:30.319 [70/267] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:30.319 [71/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:30.319 [72/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:30.319 [73/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:30.319 [74/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:30.319 [75/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:30.319 [76/267] Linking static target lib/librte_meter.a 00:02:30.319 [77/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:30.319 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:30.319 [79/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:30.319 [80/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:30.319 [81/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:30.319 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:30.319 [83/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:30.319 [84/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:30.319 [85/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:30.319 [86/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:30.319 [87/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:30.319 [88/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:30.319 [89/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:30.320 [90/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:30.320 [91/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:30.320 [92/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:30.320 [93/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:30.320 [94/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:30.320 [95/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:30.320 [96/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:30.320 [97/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:30.320 [98/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:30.320 [99/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:30.320 [100/267] Linking static target lib/librte_ring.a 00:02:30.320 [101/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:30.320 [102/267] Linking static target lib/librte_timer.a 00:02:30.320 [103/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:30.320 [104/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:30.320 [105/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:30.320 [106/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:30.320 [107/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:30.320 [108/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:30.320 [109/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:30.320 [110/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:30.320 [111/267] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:30.320 [112/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:30.320 [113/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:30.320 [114/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:30.320 [115/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:30.320 [116/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:30.320 [117/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:30.320 [118/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:30.320 [119/267] Linking static target lib/librte_cmdline.a 00:02:30.320 [120/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:30.320 [121/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:30.320 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:30.320 [123/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:30.320 [124/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:30.320 [125/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:30.320 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:30.320 [127/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:30.320 [128/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:30.320 [129/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:30.320 [130/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:30.580 [131/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:30.580 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:30.580 [133/267] Linking static target lib/librte_dmadev.a 00:02:30.580 [134/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:30.580 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:30.580 [136/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:30.580 [137/267] Linking static target lib/librte_power.a 00:02:30.580 [138/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:30.580 [139/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:30.580 [140/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:30.580 [141/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:30.580 [142/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:30.580 [143/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:30.580 [144/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:30.580 [145/267] Linking static target lib/librte_telemetry.a 00:02:30.580 [146/267] Linking static target lib/librte_mempool.a 00:02:30.580 [147/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:30.580 [148/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:30.580 [149/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:30.580 [150/267] Linking static target lib/librte_reorder.a 00:02:30.580 [151/267] Linking static target lib/librte_rcu.a 00:02:30.580 [152/267] Generating lib/log.sym_chk 
with a custom command (wrapped by meson to capture output) 00:02:30.580 [153/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:30.580 [154/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:30.580 [155/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:30.580 [156/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:30.580 [157/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:30.580 [158/267] Linking static target lib/librte_security.a 00:02:30.580 [159/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:30.580 [160/267] Linking target lib/librte_log.so.24.1 00:02:30.580 [161/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:30.580 [162/267] Linking static target lib/librte_net.a 00:02:30.580 [163/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:30.580 [164/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:30.580 [165/267] Linking static target lib/librte_compressdev.a 00:02:30.580 [166/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:30.580 [167/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:30.580 [168/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:30.580 [169/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:30.580 [170/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:30.580 [171/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:30.580 [172/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:30.580 [173/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:30.580 [174/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:30.580 [175/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.580 [176/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:30.580 [177/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:30.580 [178/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:30.580 [179/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:30.580 [180/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:30.580 [181/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:30.580 [182/267] Linking static target lib/librte_eal.a 00:02:30.580 [183/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:30.580 [184/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:30.580 [185/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:30.580 [186/267] Linking static target drivers/librte_bus_vdev.a 00:02:30.580 [187/267] Linking target lib/librte_kvargs.so.24.1 00:02:30.580 [188/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:30.580 [189/267] Linking static target lib/librte_mbuf.a 00:02:30.580 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:30.840 [191/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:30.840 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:30.840 [193/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 
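The recurring "Generating lib/<name>.sym_chk with a custom command" steps are DPDK's exported-symbol check: after each library links, a helper compares the symbols the archive actually exports against the ones declared in its version map, so accidental ABI drift fails the build. A minimal sketch of that kind of check, assuming a GNU toolchain (the file names and the map parsing below are illustrative; DPDK's real check-symbols script is more thorough):

    # sketch: diff the symbols a static lib defines against its version map
    nm --defined-only --extern-only librte_ring.a | awk 'NF==3 {print $3}' | sort -u > built.syms
    grep -oE '[A-Za-z_][A-Za-z0-9_]*;' librte_ring_version.map | tr -d ';' | sort -u > declared.syms
    diff -u declared.syms built.syms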
00:02:30.840 [194/267] Linking static target lib/librte_hash.a 00:02:30.840 [195/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:30.840 [196/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:30.840 [197/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:30.840 [198/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:30.840 [199/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:30.840 [200/267] Linking static target drivers/librte_mempool_ring.a 00:02:30.840 [201/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:30.840 [202/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:30.840 [203/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.840 [204/267] Linking static target drivers/librte_bus_pci.a 00:02:30.840 [205/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.840 [206/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:30.840 [207/267] Linking static target lib/librte_cryptodev.a 00:02:30.840 [208/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.840 [209/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.100 [210/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:31.100 [211/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.100 [212/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:31.100 [213/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.100 [214/267] Linking target lib/librte_telemetry.so.24.1 00:02:31.100 [215/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.100 [216/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.361 [217/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:31.361 [218/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.361 [219/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.361 [220/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:31.622 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.622 [222/267] Linking static target lib/librte_ethdev.a 00:02:31.622 [223/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.882 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.882 [225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.882 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.454 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:32.454 [228/267] Linking static target lib/librte_vhost.a 00:02:33.024 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to 
capture output) 00:02:34.941 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.525 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.097 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.358 [233/267] Linking target lib/librte_eal.so.24.1 00:02:42.358 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:42.358 [235/267] Linking target lib/librte_ring.so.24.1 00:02:42.358 [236/267] Linking target lib/librte_timer.so.24.1 00:02:42.358 [237/267] Linking target lib/librte_meter.so.24.1 00:02:42.358 [238/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:42.358 [239/267] Linking target lib/librte_pci.so.24.1 00:02:42.358 [240/267] Linking target lib/librte_dmadev.so.24.1 00:02:42.619 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:42.619 [242/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:42.619 [243/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:42.619 [244/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:42.619 [245/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:42.619 [246/267] Linking target lib/librte_mempool.so.24.1 00:02:42.619 [247/267] Linking target lib/librte_rcu.so.24.1 00:02:42.619 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:42.619 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:42.619 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:42.881 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:42.881 [252/267] Linking target lib/librte_mbuf.so.24.1 00:02:42.881 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:42.881 [254/267] Linking target lib/librte_net.so.24.1 00:02:42.881 [255/267] Linking target lib/librte_compressdev.so.24.1 00:02:42.881 [256/267] Linking target lib/librte_reorder.so.24.1 00:02:42.881 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:02:43.141 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:43.141 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:43.141 [260/267] Linking target lib/librte_hash.so.24.1 00:02:43.141 [261/267] Linking target lib/librte_cmdline.so.24.1 00:02:43.141 [262/267] Linking target lib/librte_security.so.24.1 00:02:43.141 [263/267] Linking target lib/librte_ethdev.so.24.1 00:02:43.141 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:43.403 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:43.403 [266/267] Linking target lib/librte_power.so.24.1 00:02:43.403 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:43.403 INFO: autodetecting backend as ninja 00:02:43.403 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:47.613 CC lib/ut_mock/mock.o 00:02:47.613 CC lib/log/log.o 00:02:47.613 CC lib/log/log_flags.o 00:02:47.613 CC lib/log/log_deprecated.o 00:02:47.613 CC lib/ut/ut.o 00:02:47.613 LIB libspdk_log.a 00:02:47.613 LIB libspdk_ut_mock.a 00:02:47.613 LIB libspdk_ut.a 
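From here the log switches from ninja to SPDK's own quiet make output: CC lines are compiles, LIB lines are static archives, and the SO/SYMLINK pairs that follow (e.g. libspdk_log.so.7.1 then libspdk_log.so) are the conventional versioned-shared-object scheme, where the library links under a versioned name and an unversioned development symlink points at it. A minimal sketch of what one such SO/SYMLINK pair amounts to, assuming the usual soname layout (the actual flags in SPDK's makefiles are not shown in the log):

    # sketch: versioned shared object plus unversioned symlink
    cc -shared -Wl,-soname,libspdk_log.so.7 -o libspdk_log.so.7.1 log.o log_flags.o log_deprecated.o
    ln -sf libspdk_log.so.7.1 libspdk_log.so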
00:02:47.613 SO libspdk_ut_mock.so.6.0 00:02:47.613 SO libspdk_log.so.7.1 00:02:47.613 SO libspdk_ut.so.2.0 00:02:47.613 SYMLINK libspdk_ut_mock.so 00:02:47.613 SYMLINK libspdk_log.so 00:02:47.613 SYMLINK libspdk_ut.so 00:02:47.875 CC lib/dma/dma.o 00:02:47.875 CC lib/util/base64.o 00:02:47.875 CC lib/util/bit_array.o 00:02:47.875 CC lib/util/cpuset.o 00:02:47.875 CC lib/util/crc16.o 00:02:47.875 CC lib/util/crc32.o 00:02:47.875 CC lib/util/crc32c.o 00:02:47.875 CC lib/util/crc32_ieee.o 00:02:47.875 CC lib/util/crc64.o 00:02:47.875 CC lib/util/dif.o 00:02:47.875 CC lib/util/fd.o 00:02:47.875 CC lib/ioat/ioat.o 00:02:47.875 CXX lib/trace_parser/trace.o 00:02:47.875 CC lib/util/fd_group.o 00:02:47.875 CC lib/util/file.o 00:02:47.875 CC lib/util/hexlify.o 00:02:47.875 CC lib/util/iov.o 00:02:47.875 CC lib/util/math.o 00:02:47.875 CC lib/util/net.o 00:02:47.875 CC lib/util/pipe.o 00:02:47.875 CC lib/util/strerror_tls.o 00:02:47.875 CC lib/util/string.o 00:02:47.875 CC lib/util/uuid.o 00:02:47.875 CC lib/util/xor.o 00:02:47.875 CC lib/util/zipf.o 00:02:47.875 CC lib/util/md5.o 00:02:47.875 CC lib/vfio_user/host/vfio_user_pci.o 00:02:47.875 CC lib/vfio_user/host/vfio_user.o 00:02:47.875 LIB libspdk_dma.a 00:02:48.137 SO libspdk_dma.so.5.0 00:02:48.137 LIB libspdk_ioat.a 00:02:48.137 SYMLINK libspdk_dma.so 00:02:48.137 SO libspdk_ioat.so.7.0 00:02:48.137 SYMLINK libspdk_ioat.so 00:02:48.137 LIB libspdk_vfio_user.a 00:02:48.137 SO libspdk_vfio_user.so.5.0 00:02:48.403 SYMLINK libspdk_vfio_user.so 00:02:48.403 LIB libspdk_util.a 00:02:48.403 SO libspdk_util.so.10.1 00:02:48.403 SYMLINK libspdk_util.so 00:02:48.664 LIB libspdk_trace_parser.a 00:02:48.664 SO libspdk_trace_parser.so.6.0 00:02:48.664 SYMLINK libspdk_trace_parser.so 00:02:48.926 CC lib/vmd/vmd.o 00:02:48.926 CC lib/vmd/led.o 00:02:48.926 CC lib/conf/conf.o 00:02:48.926 CC lib/rdma_utils/rdma_utils.o 00:02:48.926 CC lib/json/json_parse.o 00:02:48.926 CC lib/idxd/idxd.o 00:02:48.926 CC lib/json/json_util.o 00:02:48.926 CC lib/env_dpdk/env.o 00:02:48.926 CC lib/json/json_write.o 00:02:48.926 CC lib/idxd/idxd_user.o 00:02:48.926 CC lib/env_dpdk/memory.o 00:02:48.926 CC lib/env_dpdk/pci.o 00:02:48.926 CC lib/idxd/idxd_kernel.o 00:02:48.926 CC lib/env_dpdk/init.o 00:02:48.926 CC lib/env_dpdk/threads.o 00:02:48.926 CC lib/env_dpdk/pci_ioat.o 00:02:48.926 CC lib/env_dpdk/pci_virtio.o 00:02:48.926 CC lib/env_dpdk/pci_vmd.o 00:02:48.926 CC lib/env_dpdk/pci_idxd.o 00:02:48.926 CC lib/env_dpdk/pci_event.o 00:02:48.926 CC lib/env_dpdk/sigbus_handler.o 00:02:48.926 CC lib/env_dpdk/pci_dpdk.o 00:02:48.926 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:48.926 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:49.187 LIB libspdk_conf.a 00:02:49.187 LIB libspdk_rdma_utils.a 00:02:49.187 SO libspdk_conf.so.6.0 00:02:49.187 LIB libspdk_json.a 00:02:49.187 SO libspdk_rdma_utils.so.1.0 00:02:49.187 SO libspdk_json.so.6.0 00:02:49.187 SYMLINK libspdk_conf.so 00:02:49.450 SYMLINK libspdk_rdma_utils.so 00:02:49.450 SYMLINK libspdk_json.so 00:02:49.450 LIB libspdk_idxd.a 00:02:49.450 LIB libspdk_vmd.a 00:02:49.450 SO libspdk_idxd.so.12.1 00:02:49.450 SO libspdk_vmd.so.6.0 00:02:49.710 SYMLINK libspdk_idxd.so 00:02:49.710 SYMLINK libspdk_vmd.so 00:02:49.710 CC lib/rdma_provider/common.o 00:02:49.710 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:49.710 CC lib/jsonrpc/jsonrpc_server.o 00:02:49.710 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:49.710 CC lib/jsonrpc/jsonrpc_client.o 00:02:49.710 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:49.971 LIB libspdk_rdma_provider.a 00:02:49.971 SO 
libspdk_rdma_provider.so.7.0 00:02:49.971 LIB libspdk_jsonrpc.a 00:02:49.971 SYMLINK libspdk_rdma_provider.so 00:02:49.971 SO libspdk_jsonrpc.so.6.0 00:02:49.971 SYMLINK libspdk_jsonrpc.so 00:02:50.232 LIB libspdk_env_dpdk.a 00:02:50.232 SO libspdk_env_dpdk.so.15.1 00:02:50.494 SYMLINK libspdk_env_dpdk.so 00:02:50.494 CC lib/rpc/rpc.o 00:02:50.756 LIB libspdk_rpc.a 00:02:50.756 SO libspdk_rpc.so.6.0 00:02:50.756 SYMLINK libspdk_rpc.so 00:02:51.016 CC lib/trace/trace.o 00:02:51.016 CC lib/trace/trace_flags.o 00:02:51.016 CC lib/trace/trace_rpc.o 00:02:51.016 CC lib/keyring/keyring.o 00:02:51.016 CC lib/keyring/keyring_rpc.o 00:02:51.016 CC lib/notify/notify.o 00:02:51.016 CC lib/notify/notify_rpc.o 00:02:51.277 LIB libspdk_notify.a 00:02:51.277 SO libspdk_notify.so.6.0 00:02:51.277 LIB libspdk_keyring.a 00:02:51.277 LIB libspdk_trace.a 00:02:51.538 SO libspdk_keyring.so.2.0 00:02:51.538 SO libspdk_trace.so.11.0 00:02:51.538 SYMLINK libspdk_notify.so 00:02:51.538 SYMLINK libspdk_keyring.so 00:02:51.538 SYMLINK libspdk_trace.so 00:02:51.800 CC lib/thread/thread.o 00:02:51.800 CC lib/sock/sock.o 00:02:51.800 CC lib/thread/iobuf.o 00:02:51.800 CC lib/sock/sock_rpc.o 00:02:52.374 LIB libspdk_sock.a 00:02:52.374 SO libspdk_sock.so.10.0 00:02:52.374 SYMLINK libspdk_sock.so 00:02:52.635 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:52.635 CC lib/nvme/nvme_ctrlr.o 00:02:52.635 CC lib/nvme/nvme_fabric.o 00:02:52.635 CC lib/nvme/nvme_ns_cmd.o 00:02:52.635 CC lib/nvme/nvme_ns.o 00:02:52.635 CC lib/nvme/nvme_pcie_common.o 00:02:52.635 CC lib/nvme/nvme_pcie.o 00:02:52.635 CC lib/nvme/nvme_qpair.o 00:02:52.635 CC lib/nvme/nvme.o 00:02:52.635 CC lib/nvme/nvme_quirks.o 00:02:52.635 CC lib/nvme/nvme_transport.o 00:02:52.635 CC lib/nvme/nvme_discovery.o 00:02:52.635 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:52.636 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:52.636 CC lib/nvme/nvme_tcp.o 00:02:52.636 CC lib/nvme/nvme_opal.o 00:02:52.636 CC lib/nvme/nvme_io_msg.o 00:02:52.636 CC lib/nvme/nvme_poll_group.o 00:02:52.636 CC lib/nvme/nvme_zns.o 00:02:52.636 CC lib/nvme/nvme_stubs.o 00:02:52.636 CC lib/nvme/nvme_auth.o 00:02:52.636 CC lib/nvme/nvme_cuse.o 00:02:52.636 CC lib/nvme/nvme_vfio_user.o 00:02:52.636 CC lib/nvme/nvme_rdma.o 00:02:53.210 LIB libspdk_thread.a 00:02:53.210 SO libspdk_thread.so.11.0 00:02:53.210 SYMLINK libspdk_thread.so 00:02:53.783 CC lib/fsdev/fsdev.o 00:02:53.783 CC lib/fsdev/fsdev_rpc.o 00:02:53.783 CC lib/fsdev/fsdev_io.o 00:02:53.783 CC lib/accel/accel.o 00:02:53.783 CC lib/accel/accel_rpc.o 00:02:53.783 CC lib/accel/accel_sw.o 00:02:53.783 CC lib/init/json_config.o 00:02:53.783 CC lib/init/subsystem.o 00:02:53.783 CC lib/init/subsystem_rpc.o 00:02:53.783 CC lib/init/rpc.o 00:02:53.783 CC lib/blob/blobstore.o 00:02:53.783 CC lib/blob/request.o 00:02:53.783 CC lib/vfu_tgt/tgt_endpoint.o 00:02:53.783 CC lib/blob/zeroes.o 00:02:53.783 CC lib/vfu_tgt/tgt_rpc.o 00:02:53.783 CC lib/blob/blob_bs_dev.o 00:02:53.783 CC lib/virtio/virtio.o 00:02:53.784 CC lib/virtio/virtio_vhost_user.o 00:02:53.784 CC lib/virtio/virtio_vfio_user.o 00:02:53.784 CC lib/virtio/virtio_pci.o 00:02:54.044 LIB libspdk_init.a 00:02:54.044 SO libspdk_init.so.6.0 00:02:54.044 LIB libspdk_virtio.a 00:02:54.044 LIB libspdk_vfu_tgt.a 00:02:54.044 SYMLINK libspdk_init.so 00:02:54.044 SO libspdk_vfu_tgt.so.3.0 00:02:54.044 SO libspdk_virtio.so.7.0 00:02:54.044 SYMLINK libspdk_virtio.so 00:02:54.044 SYMLINK libspdk_vfu_tgt.so 00:02:54.306 LIB libspdk_fsdev.a 00:02:54.306 SO libspdk_fsdev.so.2.0 00:02:54.306 CC lib/event/reactor.o 00:02:54.306 CC 
lib/event/app.o 00:02:54.306 CC lib/event/log_rpc.o 00:02:54.306 SYMLINK libspdk_fsdev.so 00:02:54.306 CC lib/event/app_rpc.o 00:02:54.306 CC lib/event/scheduler_static.o 00:02:54.567 LIB libspdk_accel.a 00:02:54.838 SO libspdk_accel.so.16.0 00:02:54.838 LIB libspdk_nvme.a 00:02:54.838 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:54.838 SYMLINK libspdk_accel.so 00:02:54.838 LIB libspdk_event.a 00:02:54.838 SO libspdk_nvme.so.15.0 00:02:54.838 SO libspdk_event.so.14.0 00:02:55.099 SYMLINK libspdk_event.so 00:02:55.099 SYMLINK libspdk_nvme.so 00:02:55.099 CC lib/bdev/bdev.o 00:02:55.099 CC lib/bdev/bdev_rpc.o 00:02:55.099 CC lib/bdev/bdev_zone.o 00:02:55.099 CC lib/bdev/part.o 00:02:55.099 CC lib/bdev/scsi_nvme.o 00:02:55.360 LIB libspdk_fuse_dispatcher.a 00:02:55.360 SO libspdk_fuse_dispatcher.so.1.0 00:02:55.622 SYMLINK libspdk_fuse_dispatcher.so 00:02:56.565 LIB libspdk_blob.a 00:02:56.565 SO libspdk_blob.so.11.0 00:02:56.565 SYMLINK libspdk_blob.so 00:02:56.825 CC lib/blobfs/blobfs.o 00:02:56.825 CC lib/blobfs/tree.o 00:02:56.825 CC lib/lvol/lvol.o 00:02:57.767 LIB libspdk_bdev.a 00:02:57.768 LIB libspdk_blobfs.a 00:02:57.768 SO libspdk_bdev.so.17.0 00:02:57.768 SO libspdk_blobfs.so.10.0 00:02:57.768 LIB libspdk_lvol.a 00:02:57.768 SYMLINK libspdk_blobfs.so 00:02:57.768 SYMLINK libspdk_bdev.so 00:02:57.768 SO libspdk_lvol.so.10.0 00:02:57.768 SYMLINK libspdk_lvol.so 00:02:58.028 CC lib/nvmf/ctrlr.o 00:02:58.028 CC lib/nvmf/ctrlr_discovery.o 00:02:58.028 CC lib/nvmf/ctrlr_bdev.o 00:02:58.028 CC lib/nvmf/subsystem.o 00:02:58.028 CC lib/nvmf/nvmf.o 00:02:58.028 CC lib/nvmf/nvmf_rpc.o 00:02:58.028 CC lib/nvmf/transport.o 00:02:58.028 CC lib/nvmf/tcp.o 00:02:58.028 CC lib/scsi/dev.o 00:02:58.028 CC lib/nvmf/stubs.o 00:02:58.028 CC lib/scsi/lun.o 00:02:58.028 CC lib/ftl/ftl_core.o 00:02:58.028 CC lib/nvmf/mdns_server.o 00:02:58.028 CC lib/scsi/port.o 00:02:58.028 CC lib/ftl/ftl_init.o 00:02:58.028 CC lib/nvmf/vfio_user.o 00:02:58.028 CC lib/ublk/ublk.o 00:02:58.028 CC lib/ftl/ftl_layout.o 00:02:58.028 CC lib/scsi/scsi.o 00:02:58.028 CC lib/nvmf/rdma.o 00:02:58.028 CC lib/nbd/nbd.o 00:02:58.028 CC lib/nvmf/auth.o 00:02:58.028 CC lib/ublk/ublk_rpc.o 00:02:58.028 CC lib/scsi/scsi_bdev.o 00:02:58.028 CC lib/ftl/ftl_debug.o 00:02:58.028 CC lib/scsi/scsi_pr.o 00:02:58.028 CC lib/nbd/nbd_rpc.o 00:02:58.028 CC lib/ftl/ftl_io.o 00:02:58.028 CC lib/scsi/scsi_rpc.o 00:02:58.028 CC lib/ftl/ftl_sb.o 00:02:58.028 CC lib/ftl/ftl_l2p.o 00:02:58.028 CC lib/scsi/task.o 00:02:58.028 CC lib/ftl/ftl_l2p_flat.o 00:02:58.028 CC lib/ftl/ftl_nv_cache.o 00:02:58.028 CC lib/ftl/ftl_band.o 00:02:58.028 CC lib/ftl/ftl_band_ops.o 00:02:58.028 CC lib/ftl/ftl_writer.o 00:02:58.028 CC lib/ftl/ftl_rq.o 00:02:58.028 CC lib/ftl/ftl_reloc.o 00:02:58.028 CC lib/ftl/ftl_l2p_cache.o 00:02:58.028 CC lib/ftl/ftl_p2l.o 00:02:58.028 CC lib/ftl/ftl_p2l_log.o 00:02:58.028 CC lib/ftl/mngt/ftl_mngt.o 00:02:58.028 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:58.028 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:58.028 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:58.028 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:58.028 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:58.028 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:58.028 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:58.028 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:58.028 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:58.028 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:58.028 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:58.028 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:58.028 CC lib/ftl/utils/ftl_conf.o 00:02:58.028 CC lib/ftl/utils/ftl_md.o 00:02:58.028 CC 
lib/ftl/utils/ftl_mempool.o 00:02:58.028 CC lib/ftl/utils/ftl_property.o 00:02:58.028 CC lib/ftl/utils/ftl_bitmap.o 00:02:58.028 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:58.288 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:58.288 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:58.288 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:58.288 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:58.288 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:58.288 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:58.288 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:58.288 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:58.288 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:58.288 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:58.288 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:58.288 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:58.288 CC lib/ftl/base/ftl_base_dev.o 00:02:58.288 CC lib/ftl/base/ftl_base_bdev.o 00:02:58.288 CC lib/ftl/ftl_trace.o 00:02:58.860 LIB libspdk_nbd.a 00:02:58.860 SO libspdk_nbd.so.7.0 00:02:58.860 LIB libspdk_scsi.a 00:02:58.860 SYMLINK libspdk_nbd.so 00:02:58.860 SO libspdk_scsi.so.9.0 00:02:59.121 LIB libspdk_ublk.a 00:02:59.121 SYMLINK libspdk_scsi.so 00:02:59.121 SO libspdk_ublk.so.3.0 00:02:59.121 SYMLINK libspdk_ublk.so 00:02:59.385 LIB libspdk_ftl.a 00:02:59.385 CC lib/iscsi/conn.o 00:02:59.385 CC lib/iscsi/init_grp.o 00:02:59.385 CC lib/iscsi/iscsi.o 00:02:59.385 CC lib/vhost/vhost.o 00:02:59.385 CC lib/vhost/vhost_rpc.o 00:02:59.385 CC lib/iscsi/param.o 00:02:59.385 CC lib/iscsi/portal_grp.o 00:02:59.385 CC lib/vhost/vhost_scsi.o 00:02:59.385 CC lib/iscsi/tgt_node.o 00:02:59.385 CC lib/vhost/vhost_blk.o 00:02:59.385 CC lib/iscsi/iscsi_subsystem.o 00:02:59.385 CC lib/vhost/rte_vhost_user.o 00:02:59.385 CC lib/iscsi/iscsi_rpc.o 00:02:59.385 CC lib/iscsi/task.o 00:02:59.646 SO libspdk_ftl.so.9.0 00:02:59.907 SYMLINK libspdk_ftl.so 00:03:00.168 LIB libspdk_nvmf.a 00:03:00.431 SO libspdk_nvmf.so.20.0 00:03:00.431 LIB libspdk_vhost.a 00:03:00.431 SO libspdk_vhost.so.8.0 00:03:00.431 SYMLINK libspdk_nvmf.so 00:03:00.692 SYMLINK libspdk_vhost.so 00:03:00.692 LIB libspdk_iscsi.a 00:03:00.692 SO libspdk_iscsi.so.8.0 00:03:00.953 SYMLINK libspdk_iscsi.so 00:03:01.526 CC module/env_dpdk/env_dpdk_rpc.o 00:03:01.526 CC module/vfu_device/vfu_virtio.o 00:03:01.526 CC module/vfu_device/vfu_virtio_blk.o 00:03:01.526 CC module/vfu_device/vfu_virtio_scsi.o 00:03:01.526 CC module/vfu_device/vfu_virtio_rpc.o 00:03:01.526 CC module/vfu_device/vfu_virtio_fs.o 00:03:01.526 LIB libspdk_env_dpdk_rpc.a 00:03:01.526 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:01.787 CC module/accel/error/accel_error.o 00:03:01.787 CC module/accel/error/accel_error_rpc.o 00:03:01.787 CC module/keyring/file/keyring.o 00:03:01.787 CC module/blob/bdev/blob_bdev.o 00:03:01.787 CC module/keyring/file/keyring_rpc.o 00:03:01.787 CC module/accel/iaa/accel_iaa.o 00:03:01.787 CC module/accel/iaa/accel_iaa_rpc.o 00:03:01.787 CC module/fsdev/aio/fsdev_aio.o 00:03:01.787 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:01.787 CC module/accel/ioat/accel_ioat.o 00:03:01.787 CC module/fsdev/aio/linux_aio_mgr.o 00:03:01.787 CC module/scheduler/gscheduler/gscheduler.o 00:03:01.787 CC module/accel/dsa/accel_dsa.o 00:03:01.787 CC module/accel/ioat/accel_ioat_rpc.o 00:03:01.787 CC module/accel/dsa/accel_dsa_rpc.o 00:03:01.787 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:01.787 CC module/sock/posix/posix.o 00:03:01.787 CC module/keyring/linux/keyring.o 00:03:01.787 CC module/keyring/linux/keyring_rpc.o 00:03:01.787 SO libspdk_env_dpdk_rpc.so.6.0 00:03:01.787 SYMLINK 
libspdk_env_dpdk_rpc.so 00:03:01.787 LIB libspdk_keyring_file.a 00:03:01.787 LIB libspdk_keyring_linux.a 00:03:01.787 LIB libspdk_scheduler_dpdk_governor.a 00:03:01.787 LIB libspdk_scheduler_gscheduler.a 00:03:01.787 LIB libspdk_scheduler_dynamic.a 00:03:01.787 SO libspdk_keyring_file.so.2.0 00:03:01.787 LIB libspdk_accel_ioat.a 00:03:01.787 SO libspdk_keyring_linux.so.1.0 00:03:01.787 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:02.049 LIB libspdk_accel_error.a 00:03:02.049 SO libspdk_scheduler_gscheduler.so.4.0 00:03:02.049 LIB libspdk_accel_iaa.a 00:03:02.049 SO libspdk_scheduler_dynamic.so.4.0 00:03:02.049 SO libspdk_accel_ioat.so.6.0 00:03:02.049 SO libspdk_accel_error.so.2.0 00:03:02.049 SO libspdk_accel_iaa.so.3.0 00:03:02.049 LIB libspdk_blob_bdev.a 00:03:02.049 SYMLINK libspdk_keyring_file.so 00:03:02.049 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:02.049 SYMLINK libspdk_keyring_linux.so 00:03:02.049 LIB libspdk_accel_dsa.a 00:03:02.049 SYMLINK libspdk_scheduler_gscheduler.so 00:03:02.049 SO libspdk_blob_bdev.so.11.0 00:03:02.049 SYMLINK libspdk_scheduler_dynamic.so 00:03:02.049 SO libspdk_accel_dsa.so.5.0 00:03:02.049 SYMLINK libspdk_accel_ioat.so 00:03:02.049 SYMLINK libspdk_accel_error.so 00:03:02.049 SYMLINK libspdk_accel_iaa.so 00:03:02.049 LIB libspdk_vfu_device.a 00:03:02.049 SYMLINK libspdk_blob_bdev.so 00:03:02.049 SYMLINK libspdk_accel_dsa.so 00:03:02.049 SO libspdk_vfu_device.so.3.0 00:03:02.309 SYMLINK libspdk_vfu_device.so 00:03:02.309 LIB libspdk_fsdev_aio.a 00:03:02.309 SO libspdk_fsdev_aio.so.1.0 00:03:02.309 LIB libspdk_sock_posix.a 00:03:02.570 SO libspdk_sock_posix.so.6.0 00:03:02.570 SYMLINK libspdk_fsdev_aio.so 00:03:02.570 SYMLINK libspdk_sock_posix.so 00:03:02.570 CC module/bdev/gpt/gpt.o 00:03:02.570 CC module/bdev/gpt/vbdev_gpt.o 00:03:02.570 CC module/bdev/null/bdev_null.o 00:03:02.570 CC module/bdev/null/bdev_null_rpc.o 00:03:02.570 CC module/bdev/error/vbdev_error.o 00:03:02.570 CC module/bdev/malloc/bdev_malloc.o 00:03:02.570 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:02.571 CC module/bdev/error/vbdev_error_rpc.o 00:03:02.571 CC module/bdev/delay/vbdev_delay.o 00:03:02.571 CC module/bdev/lvol/vbdev_lvol.o 00:03:02.571 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:02.571 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:02.571 CC module/bdev/aio/bdev_aio.o 00:03:02.571 CC module/bdev/aio/bdev_aio_rpc.o 00:03:02.571 CC module/bdev/passthru/vbdev_passthru.o 00:03:02.571 CC module/bdev/nvme/bdev_nvme.o 00:03:02.571 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:02.571 CC module/bdev/ftl/bdev_ftl.o 00:03:02.571 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:02.571 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:02.571 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:02.571 CC module/bdev/raid/bdev_raid.o 00:03:02.571 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:02.571 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:02.571 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:02.571 CC module/bdev/raid/bdev_raid_rpc.o 00:03:02.571 CC module/bdev/nvme/nvme_rpc.o 00:03:02.571 CC module/blobfs/bdev/blobfs_bdev.o 00:03:02.571 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:02.571 CC module/bdev/raid/bdev_raid_sb.o 00:03:02.571 CC module/bdev/nvme/bdev_mdns_client.o 00:03:02.571 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:02.571 CC module/bdev/raid/raid0.o 00:03:02.571 CC module/bdev/iscsi/bdev_iscsi.o 00:03:02.571 CC module/bdev/nvme/vbdev_opal.o 00:03:02.571 CC module/bdev/raid/raid1.o 00:03:02.571 CC module/bdev/split/vbdev_split.o 00:03:02.571 CC 
module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:02.571 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:02.571 CC module/bdev/raid/concat.o 00:03:02.571 CC module/bdev/split/vbdev_split_rpc.o 00:03:02.571 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:03.143 LIB libspdk_blobfs_bdev.a 00:03:03.143 LIB libspdk_bdev_error.a 00:03:03.143 LIB libspdk_bdev_gpt.a 00:03:03.143 LIB libspdk_bdev_null.a 00:03:03.143 SO libspdk_blobfs_bdev.so.6.0 00:03:03.143 SO libspdk_bdev_error.so.6.0 00:03:03.143 LIB libspdk_bdev_split.a 00:03:03.143 SO libspdk_bdev_null.so.6.0 00:03:03.143 SO libspdk_bdev_gpt.so.6.0 00:03:03.143 LIB libspdk_bdev_ftl.a 00:03:03.143 SYMLINK libspdk_blobfs_bdev.so 00:03:03.143 SO libspdk_bdev_split.so.6.0 00:03:03.143 LIB libspdk_bdev_malloc.a 00:03:03.143 LIB libspdk_bdev_passthru.a 00:03:03.143 SO libspdk_bdev_ftl.so.6.0 00:03:03.143 SYMLINK libspdk_bdev_error.so 00:03:03.143 LIB libspdk_bdev_aio.a 00:03:03.143 SYMLINK libspdk_bdev_null.so 00:03:03.143 LIB libspdk_bdev_zone_block.a 00:03:03.143 LIB libspdk_bdev_delay.a 00:03:03.143 SYMLINK libspdk_bdev_gpt.so 00:03:03.143 SO libspdk_bdev_malloc.so.6.0 00:03:03.143 SO libspdk_bdev_passthru.so.6.0 00:03:03.143 SYMLINK libspdk_bdev_split.so 00:03:03.143 SO libspdk_bdev_aio.so.6.0 00:03:03.143 LIB libspdk_bdev_iscsi.a 00:03:03.143 SO libspdk_bdev_delay.so.6.0 00:03:03.143 SO libspdk_bdev_zone_block.so.6.0 00:03:03.143 SYMLINK libspdk_bdev_ftl.so 00:03:03.143 SO libspdk_bdev_iscsi.so.6.0 00:03:03.143 SYMLINK libspdk_bdev_passthru.so 00:03:03.143 SYMLINK libspdk_bdev_malloc.so 00:03:03.143 SYMLINK libspdk_bdev_aio.so 00:03:03.143 LIB libspdk_bdev_lvol.a 00:03:03.143 SYMLINK libspdk_bdev_delay.so 00:03:03.143 SYMLINK libspdk_bdev_zone_block.so 00:03:03.405 SYMLINK libspdk_bdev_iscsi.so 00:03:03.405 SO libspdk_bdev_lvol.so.6.0 00:03:03.405 LIB libspdk_bdev_virtio.a 00:03:03.405 SO libspdk_bdev_virtio.so.6.0 00:03:03.405 SYMLINK libspdk_bdev_lvol.so 00:03:03.405 SYMLINK libspdk_bdev_virtio.so 00:03:03.666 LIB libspdk_bdev_raid.a 00:03:03.666 SO libspdk_bdev_raid.so.6.0 00:03:03.928 SYMLINK libspdk_bdev_raid.so 00:03:05.315 LIB libspdk_bdev_nvme.a 00:03:05.315 SO libspdk_bdev_nvme.so.7.1 00:03:05.315 SYMLINK libspdk_bdev_nvme.so 00:03:05.888 CC module/event/subsystems/iobuf/iobuf.o 00:03:05.888 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:05.888 CC module/event/subsystems/vmd/vmd.o 00:03:05.888 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:05.888 CC module/event/subsystems/sock/sock.o 00:03:05.888 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:05.888 CC module/event/subsystems/fsdev/fsdev.o 00:03:05.888 CC module/event/subsystems/keyring/keyring.o 00:03:05.888 CC module/event/subsystems/scheduler/scheduler.o 00:03:05.888 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:06.149 LIB libspdk_event_keyring.a 00:03:06.149 LIB libspdk_event_vfu_tgt.a 00:03:06.149 LIB libspdk_event_iobuf.a 00:03:06.149 LIB libspdk_event_vhost_blk.a 00:03:06.149 LIB libspdk_event_fsdev.a 00:03:06.149 LIB libspdk_event_vmd.a 00:03:06.149 LIB libspdk_event_scheduler.a 00:03:06.149 LIB libspdk_event_sock.a 00:03:06.149 SO libspdk_event_keyring.so.1.0 00:03:06.149 SO libspdk_event_vfu_tgt.so.3.0 00:03:06.149 SO libspdk_event_vhost_blk.so.3.0 00:03:06.149 SO libspdk_event_fsdev.so.1.0 00:03:06.149 SO libspdk_event_iobuf.so.3.0 00:03:06.149 SO libspdk_event_scheduler.so.4.0 00:03:06.149 SO libspdk_event_vmd.so.6.0 00:03:06.149 SO libspdk_event_sock.so.5.0 00:03:06.149 SYMLINK libspdk_event_keyring.so 00:03:06.149 SYMLINK libspdk_event_vhost_blk.so 00:03:06.149 SYMLINK 
libspdk_event_fsdev.so 00:03:06.149 SYMLINK libspdk_event_vfu_tgt.so 00:03:06.149 SYMLINK libspdk_event_iobuf.so 00:03:06.149 SYMLINK libspdk_event_scheduler.so 00:03:06.149 SYMLINK libspdk_event_sock.so 00:03:06.149 SYMLINK libspdk_event_vmd.so 00:03:06.720 CC module/event/subsystems/accel/accel.o 00:03:06.720 LIB libspdk_event_accel.a 00:03:06.720 SO libspdk_event_accel.so.6.0 00:03:06.720 SYMLINK libspdk_event_accel.so 00:03:07.292 CC module/event/subsystems/bdev/bdev.o 00:03:07.293 LIB libspdk_event_bdev.a 00:03:07.293 SO libspdk_event_bdev.so.6.0 00:03:07.554 SYMLINK libspdk_event_bdev.so 00:03:07.816 CC module/event/subsystems/scsi/scsi.o 00:03:07.816 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:07.816 CC module/event/subsystems/ublk/ublk.o 00:03:07.816 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:07.816 CC module/event/subsystems/nbd/nbd.o 00:03:08.077 LIB libspdk_event_ublk.a 00:03:08.077 LIB libspdk_event_nbd.a 00:03:08.077 LIB libspdk_event_scsi.a 00:03:08.077 SO libspdk_event_ublk.so.3.0 00:03:08.077 SO libspdk_event_nbd.so.6.0 00:03:08.077 SO libspdk_event_scsi.so.6.0 00:03:08.077 LIB libspdk_event_nvmf.a 00:03:08.077 SYMLINK libspdk_event_ublk.so 00:03:08.077 SYMLINK libspdk_event_nbd.so 00:03:08.077 SYMLINK libspdk_event_scsi.so 00:03:08.077 SO libspdk_event_nvmf.so.6.0 00:03:08.338 SYMLINK libspdk_event_nvmf.so 00:03:08.599 CC module/event/subsystems/iscsi/iscsi.o 00:03:08.599 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:08.599 LIB libspdk_event_vhost_scsi.a 00:03:08.599 LIB libspdk_event_iscsi.a 00:03:08.599 SO libspdk_event_vhost_scsi.so.3.0 00:03:08.599 SO libspdk_event_iscsi.so.6.0 00:03:08.860 SYMLINK libspdk_event_vhost_scsi.so 00:03:08.860 SYMLINK libspdk_event_iscsi.so 00:03:08.860 SO libspdk.so.6.0 00:03:08.860 SYMLINK libspdk.so 00:03:09.433 CXX app/trace/trace.o 00:03:09.433 CC app/trace_record/trace_record.o 00:03:09.433 TEST_HEADER include/spdk/accel.h 00:03:09.433 TEST_HEADER include/spdk/accel_module.h 00:03:09.433 CC app/spdk_nvme_discover/discovery_aer.o 00:03:09.433 TEST_HEADER include/spdk/assert.h 00:03:09.433 TEST_HEADER include/spdk/barrier.h 00:03:09.433 CC app/spdk_top/spdk_top.o 00:03:09.433 TEST_HEADER include/spdk/bdev.h 00:03:09.433 CC app/spdk_nvme_perf/perf.o 00:03:09.433 TEST_HEADER include/spdk/base64.h 00:03:09.433 TEST_HEADER include/spdk/bdev_module.h 00:03:09.434 CC test/rpc_client/rpc_client_test.o 00:03:09.434 TEST_HEADER include/spdk/bdev_zone.h 00:03:09.434 TEST_HEADER include/spdk/bit_array.h 00:03:09.434 TEST_HEADER include/spdk/blob_bdev.h 00:03:09.434 CC app/spdk_nvme_identify/identify.o 00:03:09.434 CC app/spdk_lspci/spdk_lspci.o 00:03:09.434 TEST_HEADER include/spdk/bit_pool.h 00:03:09.434 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:09.434 TEST_HEADER include/spdk/blobfs.h 00:03:09.434 TEST_HEADER include/spdk/blob.h 00:03:09.434 TEST_HEADER include/spdk/config.h 00:03:09.434 TEST_HEADER include/spdk/cpuset.h 00:03:09.434 TEST_HEADER include/spdk/conf.h 00:03:09.434 TEST_HEADER include/spdk/crc16.h 00:03:09.434 TEST_HEADER include/spdk/crc32.h 00:03:09.434 TEST_HEADER include/spdk/crc64.h 00:03:09.434 TEST_HEADER include/spdk/dif.h 00:03:09.434 TEST_HEADER include/spdk/dma.h 00:03:09.434 TEST_HEADER include/spdk/endian.h 00:03:09.434 TEST_HEADER include/spdk/env_dpdk.h 00:03:09.434 TEST_HEADER include/spdk/env.h 00:03:09.434 TEST_HEADER include/spdk/event.h 00:03:09.434 TEST_HEADER include/spdk/fd_group.h 00:03:09.434 TEST_HEADER include/spdk/file.h 00:03:09.434 TEST_HEADER include/spdk/fd.h 00:03:09.434 
TEST_HEADER include/spdk/fsdev.h 00:03:09.434 CC app/spdk_dd/spdk_dd.o 00:03:09.434 TEST_HEADER include/spdk/fsdev_module.h 00:03:09.434 TEST_HEADER include/spdk/ftl.h 00:03:09.434 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:09.434 TEST_HEADER include/spdk/gpt_spec.h 00:03:09.434 TEST_HEADER include/spdk/hexlify.h 00:03:09.434 CC app/nvmf_tgt/nvmf_main.o 00:03:09.434 TEST_HEADER include/spdk/histogram_data.h 00:03:09.434 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:09.434 TEST_HEADER include/spdk/idxd.h 00:03:09.434 TEST_HEADER include/spdk/init.h 00:03:09.434 TEST_HEADER include/spdk/idxd_spec.h 00:03:09.434 TEST_HEADER include/spdk/ioat_spec.h 00:03:09.434 TEST_HEADER include/spdk/ioat.h 00:03:09.434 TEST_HEADER include/spdk/iscsi_spec.h 00:03:09.434 TEST_HEADER include/spdk/json.h 00:03:09.434 TEST_HEADER include/spdk/keyring.h 00:03:09.434 TEST_HEADER include/spdk/jsonrpc.h 00:03:09.434 TEST_HEADER include/spdk/keyring_module.h 00:03:09.434 CC app/iscsi_tgt/iscsi_tgt.o 00:03:09.434 TEST_HEADER include/spdk/log.h 00:03:09.434 TEST_HEADER include/spdk/likely.h 00:03:09.434 TEST_HEADER include/spdk/md5.h 00:03:09.434 TEST_HEADER include/spdk/memory.h 00:03:09.434 TEST_HEADER include/spdk/lvol.h 00:03:09.434 TEST_HEADER include/spdk/mmio.h 00:03:09.434 TEST_HEADER include/spdk/nbd.h 00:03:09.434 TEST_HEADER include/spdk/net.h 00:03:09.434 TEST_HEADER include/spdk/notify.h 00:03:09.434 TEST_HEADER include/spdk/nvme.h 00:03:09.434 TEST_HEADER include/spdk/nvme_intel.h 00:03:09.434 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:09.434 CC app/spdk_tgt/spdk_tgt.o 00:03:09.434 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:09.434 TEST_HEADER include/spdk/nvme_spec.h 00:03:09.434 TEST_HEADER include/spdk/nvme_zns.h 00:03:09.434 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:09.434 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:09.434 TEST_HEADER include/spdk/nvmf.h 00:03:09.434 TEST_HEADER include/spdk/nvmf_spec.h 00:03:09.434 TEST_HEADER include/spdk/opal_spec.h 00:03:09.434 TEST_HEADER include/spdk/nvmf_transport.h 00:03:09.434 TEST_HEADER include/spdk/opal.h 00:03:09.434 TEST_HEADER include/spdk/pci_ids.h 00:03:09.434 TEST_HEADER include/spdk/pipe.h 00:03:09.434 TEST_HEADER include/spdk/queue.h 00:03:09.434 TEST_HEADER include/spdk/reduce.h 00:03:09.434 TEST_HEADER include/spdk/rpc.h 00:03:09.434 TEST_HEADER include/spdk/scheduler.h 00:03:09.434 TEST_HEADER include/spdk/scsi.h 00:03:09.434 TEST_HEADER include/spdk/scsi_spec.h 00:03:09.434 TEST_HEADER include/spdk/sock.h 00:03:09.434 TEST_HEADER include/spdk/stdinc.h 00:03:09.434 TEST_HEADER include/spdk/string.h 00:03:09.434 TEST_HEADER include/spdk/trace.h 00:03:09.434 TEST_HEADER include/spdk/thread.h 00:03:09.434 TEST_HEADER include/spdk/trace_parser.h 00:03:09.434 TEST_HEADER include/spdk/tree.h 00:03:09.434 TEST_HEADER include/spdk/util.h 00:03:09.434 TEST_HEADER include/spdk/ublk.h 00:03:09.434 TEST_HEADER include/spdk/uuid.h 00:03:09.434 TEST_HEADER include/spdk/version.h 00:03:09.434 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:09.434 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:09.434 TEST_HEADER include/spdk/vhost.h 00:03:09.434 TEST_HEADER include/spdk/vmd.h 00:03:09.434 TEST_HEADER include/spdk/xor.h 00:03:09.434 TEST_HEADER include/spdk/zipf.h 00:03:09.434 CXX test/cpp_headers/accel.o 00:03:09.434 CXX test/cpp_headers/accel_module.o 00:03:09.434 CXX test/cpp_headers/barrier.o 00:03:09.434 CXX test/cpp_headers/assert.o 00:03:09.434 CXX test/cpp_headers/base64.o 00:03:09.434 CXX test/cpp_headers/bdev_zone.o 00:03:09.434 CXX 
test/cpp_headers/bdev.o 00:03:09.434 CXX test/cpp_headers/bdev_module.o 00:03:09.434 CXX test/cpp_headers/bit_array.o 00:03:09.434 CXX test/cpp_headers/bit_pool.o 00:03:09.434 CXX test/cpp_headers/blob_bdev.o 00:03:09.434 CXX test/cpp_headers/blobfs.o 00:03:09.434 CXX test/cpp_headers/blob.o 00:03:09.434 CXX test/cpp_headers/blobfs_bdev.o 00:03:09.703 CXX test/cpp_headers/conf.o 00:03:09.703 CXX test/cpp_headers/config.o 00:03:09.703 CXX test/cpp_headers/cpuset.o 00:03:09.703 CXX test/cpp_headers/crc16.o 00:03:09.703 CXX test/cpp_headers/crc32.o 00:03:09.703 CXX test/cpp_headers/crc64.o 00:03:09.703 CXX test/cpp_headers/dif.o 00:03:09.703 CXX test/cpp_headers/dma.o 00:03:09.703 CXX test/cpp_headers/env_dpdk.o 00:03:09.703 CXX test/cpp_headers/endian.o 00:03:09.703 CXX test/cpp_headers/event.o 00:03:09.703 CXX test/cpp_headers/env.o 00:03:09.703 CXX test/cpp_headers/fd_group.o 00:03:09.703 CXX test/cpp_headers/file.o 00:03:09.703 CXX test/cpp_headers/fd.o 00:03:09.703 CXX test/cpp_headers/fsdev.o 00:03:09.703 CXX test/cpp_headers/fsdev_module.o 00:03:09.703 CXX test/cpp_headers/ftl.o 00:03:09.703 CXX test/cpp_headers/fuse_dispatcher.o 00:03:09.703 CXX test/cpp_headers/gpt_spec.o 00:03:09.703 CXX test/cpp_headers/hexlify.o 00:03:09.703 CXX test/cpp_headers/histogram_data.o 00:03:09.703 CXX test/cpp_headers/idxd_spec.o 00:03:09.703 CXX test/cpp_headers/idxd.o 00:03:09.703 CXX test/cpp_headers/ioat_spec.o 00:03:09.703 CXX test/cpp_headers/init.o 00:03:09.703 CXX test/cpp_headers/json.o 00:03:09.703 CXX test/cpp_headers/ioat.o 00:03:09.703 CXX test/cpp_headers/iscsi_spec.o 00:03:09.703 CXX test/cpp_headers/jsonrpc.o 00:03:09.703 CXX test/cpp_headers/keyring_module.o 00:03:09.703 CXX test/cpp_headers/likely.o 00:03:09.703 CXX test/cpp_headers/keyring.o 00:03:09.703 CXX test/cpp_headers/log.o 00:03:09.703 CXX test/cpp_headers/lvol.o 00:03:09.703 CXX test/cpp_headers/nbd.o 00:03:09.703 CXX test/cpp_headers/memory.o 00:03:09.703 CXX test/cpp_headers/mmio.o 00:03:09.703 CXX test/cpp_headers/md5.o 00:03:09.703 CXX test/cpp_headers/net.o 00:03:09.703 CXX test/cpp_headers/notify.o 00:03:09.704 CXX test/cpp_headers/nvme_intel.o 00:03:09.704 CXX test/cpp_headers/nvme.o 00:03:09.704 CXX test/cpp_headers/nvme_ocssd.o 00:03:09.704 CXX test/cpp_headers/nvme_zns.o 00:03:09.704 CXX test/cpp_headers/nvme_spec.o 00:03:09.704 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:09.704 CXX test/cpp_headers/nvmf_cmd.o 00:03:09.704 CXX test/cpp_headers/nvmf_transport.o 00:03:09.704 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:09.704 CXX test/cpp_headers/opal.o 00:03:09.704 CXX test/cpp_headers/nvmf.o 00:03:09.704 CXX test/cpp_headers/nvmf_spec.o 00:03:09.704 CXX test/cpp_headers/pci_ids.o 00:03:09.704 CXX test/cpp_headers/opal_spec.o 00:03:09.704 CXX test/cpp_headers/queue.o 00:03:09.704 CXX test/cpp_headers/pipe.o 00:03:09.704 CXX test/cpp_headers/reduce.o 00:03:09.704 CXX test/cpp_headers/rpc.o 00:03:09.704 CC examples/util/zipf/zipf.o 00:03:09.704 CC test/app/stub/stub.o 00:03:09.704 CXX test/cpp_headers/scheduler.o 00:03:09.704 CXX test/cpp_headers/scsi_spec.o 00:03:09.704 CXX test/cpp_headers/scsi.o 00:03:09.704 CXX test/cpp_headers/sock.o 00:03:09.704 CXX test/cpp_headers/stdinc.o 00:03:09.704 CC examples/ioat/verify/verify.o 00:03:09.704 CXX test/cpp_headers/string.o 00:03:09.704 CXX test/cpp_headers/thread.o 00:03:09.704 CXX test/cpp_headers/trace_parser.o 00:03:09.704 CC test/thread/poller_perf/poller_perf.o 00:03:09.704 CXX test/cpp_headers/tree.o 00:03:09.704 CXX test/cpp_headers/trace.o 00:03:09.704 CC 
examples/ioat/perf/perf.o 00:03:09.704 CXX test/cpp_headers/ublk.o 00:03:09.704 CC test/app/histogram_perf/histogram_perf.o 00:03:09.704 CXX test/cpp_headers/uuid.o 00:03:09.704 CXX test/cpp_headers/util.o 00:03:09.704 CXX test/cpp_headers/vfio_user_pci.o 00:03:09.704 CXX test/cpp_headers/version.o 00:03:09.704 CXX test/cpp_headers/vhost.o 00:03:09.704 CC app/fio/nvme/fio_plugin.o 00:03:09.704 CXX test/cpp_headers/vfio_user_spec.o 00:03:09.704 CXX test/cpp_headers/zipf.o 00:03:09.704 CXX test/cpp_headers/vmd.o 00:03:09.704 CXX test/cpp_headers/xor.o 00:03:09.704 CC test/env/vtophys/vtophys.o 00:03:09.704 CC test/app/jsoncat/jsoncat.o 00:03:09.704 CC test/env/memory/memory_ut.o 00:03:09.704 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:09.704 LINK spdk_lspci 00:03:09.704 CC test/dma/test_dma/test_dma.o 00:03:09.704 CC test/env/pci/pci_ut.o 00:03:09.970 CC test/app/bdev_svc/bdev_svc.o 00:03:09.970 CC app/fio/bdev/fio_plugin.o 00:03:09.970 LINK spdk_nvme_discover 00:03:09.970 LINK nvmf_tgt 00:03:09.970 LINK rpc_client_test 00:03:10.238 LINK interrupt_tgt 00:03:10.238 LINK spdk_trace_record 00:03:10.238 LINK iscsi_tgt 00:03:10.501 LINK spdk_tgt 00:03:10.501 CC test/env/mem_callbacks/mem_callbacks.o 00:03:10.501 LINK env_dpdk_post_init 00:03:10.501 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:10.501 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:10.501 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:10.501 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:10.760 LINK spdk_dd 00:03:10.760 LINK histogram_perf 00:03:10.760 LINK poller_perf 00:03:11.022 LINK zipf 00:03:11.022 LINK jsoncat 00:03:11.022 LINK vtophys 00:03:11.022 LINK pci_ut 00:03:11.022 LINK bdev_svc 00:03:11.022 LINK ioat_perf 00:03:11.022 LINK stub 00:03:11.022 LINK spdk_trace 00:03:11.022 LINK verify 00:03:11.283 LINK vhost_fuzz 00:03:11.283 LINK nvme_fuzz 00:03:11.283 CC test/event/reactor_perf/reactor_perf.o 00:03:11.283 CC test/event/event_perf/event_perf.o 00:03:11.283 LINK test_dma 00:03:11.283 CC test/event/reactor/reactor.o 00:03:11.283 CC test/event/app_repeat/app_repeat.o 00:03:11.283 LINK spdk_top 00:03:11.283 LINK spdk_bdev 00:03:11.283 LINK spdk_nvme 00:03:11.283 LINK spdk_nvme_perf 00:03:11.283 CC test/event/scheduler/scheduler.o 00:03:11.543 LINK mem_callbacks 00:03:11.543 LINK spdk_nvme_identify 00:03:11.543 CC examples/idxd/perf/perf.o 00:03:11.543 CC examples/sock/hello_world/hello_sock.o 00:03:11.543 CC examples/vmd/led/led.o 00:03:11.543 CC examples/vmd/lsvmd/lsvmd.o 00:03:11.543 CC app/vhost/vhost.o 00:03:11.543 LINK reactor_perf 00:03:11.543 LINK reactor 00:03:11.543 LINK event_perf 00:03:11.543 LINK app_repeat 00:03:11.543 CC examples/thread/thread/thread_ex.o 00:03:11.543 LINK led 00:03:11.543 LINK lsvmd 00:03:11.543 LINK scheduler 00:03:11.803 LINK vhost 00:03:11.804 LINK idxd_perf 00:03:11.804 LINK hello_sock 00:03:11.804 LINK thread 00:03:11.804 CC test/nvme/overhead/overhead.o 00:03:11.804 CC test/nvme/e2edp/nvme_dp.o 00:03:11.804 LINK memory_ut 00:03:11.804 CC test/nvme/compliance/nvme_compliance.o 00:03:11.804 CC test/nvme/connect_stress/connect_stress.o 00:03:11.804 CC test/nvme/boot_partition/boot_partition.o 00:03:11.804 CC test/nvme/reserve/reserve.o 00:03:11.804 CC test/nvme/sgl/sgl.o 00:03:11.804 CC test/nvme/startup/startup.o 00:03:11.804 CC test/nvme/fdp/fdp.o 00:03:11.804 CC test/nvme/cuse/cuse.o 00:03:11.804 CC test/nvme/reset/reset.o 00:03:11.804 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:11.804 CC test/nvme/err_injection/err_injection.o 00:03:11.804 CC test/nvme/aer/aer.o 
00:03:11.804 CC test/nvme/fused_ordering/fused_ordering.o 00:03:11.804 CC test/nvme/simple_copy/simple_copy.o 00:03:12.064 CC test/blobfs/mkfs/mkfs.o 00:03:12.064 CC test/accel/dif/dif.o 00:03:12.064 CC test/lvol/esnap/esnap.o 00:03:12.064 LINK connect_stress 00:03:12.064 LINK startup 00:03:12.064 LINK boot_partition 00:03:12.064 LINK err_injection 00:03:12.064 LINK doorbell_aers 00:03:12.064 LINK reserve 00:03:12.064 LINK overhead 00:03:12.064 LINK fused_ordering 00:03:12.064 LINK nvme_dp 00:03:12.064 LINK simple_copy 00:03:12.325 LINK mkfs 00:03:12.325 LINK aer 00:03:12.325 LINK sgl 00:03:12.325 LINK reset 00:03:12.325 LINK nvme_compliance 00:03:12.325 CC examples/nvme/hello_world/hello_world.o 00:03:12.325 LINK fdp 00:03:12.325 CC examples/nvme/reconnect/reconnect.o 00:03:12.325 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:12.325 CC examples/nvme/arbitration/arbitration.o 00:03:12.325 LINK iscsi_fuzz 00:03:12.325 CC examples/nvme/hotplug/hotplug.o 00:03:12.325 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:12.325 CC examples/nvme/abort/abort.o 00:03:12.325 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:12.325 CC examples/accel/perf/accel_perf.o 00:03:12.325 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:12.325 CC examples/blob/cli/blobcli.o 00:03:12.325 CC examples/blob/hello_world/hello_blob.o 00:03:12.586 LINK cmb_copy 00:03:12.587 LINK hello_world 00:03:12.587 LINK pmr_persistence 00:03:12.587 LINK hotplug 00:03:12.587 LINK arbitration 00:03:12.587 LINK reconnect 00:03:12.587 LINK nvme_manage 00:03:12.587 LINK abort 00:03:12.587 LINK hello_blob 00:03:12.587 LINK dif 00:03:12.587 LINK hello_fsdev 00:03:12.848 LINK accel_perf 00:03:12.848 LINK blobcli 00:03:13.107 LINK cuse 00:03:13.368 CC test/bdev/bdevio/bdevio.o 00:03:13.368 CC examples/bdev/hello_world/hello_bdev.o 00:03:13.368 CC examples/bdev/bdevperf/bdevperf.o 00:03:13.627 LINK hello_bdev 00:03:13.627 LINK bdevio 00:03:14.196 LINK bdevperf 00:03:14.766 CC examples/nvmf/nvmf/nvmf.o 00:03:15.026 LINK nvmf 00:03:15.967 LINK esnap 00:03:16.541 00:03:16.541 real 0m56.422s 00:03:16.541 user 8m6.272s 00:03:16.541 sys 6m2.038s 00:03:16.541 11:26:41 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:03:16.541 11:26:41 make -- common/autotest_common.sh@10 -- $ set +x 00:03:16.541 ************************************ 00:03:16.541 END TEST make 00:03:16.541 ************************************ 00:03:16.541 11:26:41 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:16.541 11:26:41 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:16.541 11:26:41 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:16.541 11:26:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.541 11:26:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:16.541 11:26:41 -- pm/common@44 -- $ pid=743877 00:03:16.541 11:26:41 -- pm/common@50 -- $ kill -TERM 743877 00:03:16.541 11:26:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.541 11:26:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:16.541 11:26:41 -- pm/common@44 -- $ pid=743878 00:03:16.541 11:26:41 -- pm/common@50 -- $ kill -TERM 743878 00:03:16.541 11:26:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.541 11:26:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:16.541 11:26:41 
-- pm/common@44 -- $ pid=743880 00:03:16.541 11:26:41 -- pm/common@50 -- $ kill -TERM 743880 00:03:16.541 11:26:41 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.541 11:26:41 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:16.541 11:26:41 -- pm/common@44 -- $ pid=743904 00:03:16.541 11:26:41 -- pm/common@50 -- $ sudo -E kill -TERM 743904 00:03:16.541 11:26:41 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:16.541 11:26:41 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:16.541 11:26:41 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:16.541 11:26:41 -- common/autotest_common.sh@1691 -- # lcov --version 00:03:16.541 11:26:41 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:16.541 11:26:42 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:16.541 11:26:42 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:16.541 11:26:42 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:16.541 11:26:42 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:16.541 11:26:42 -- scripts/common.sh@336 -- # IFS=.-: 00:03:16.541 11:26:42 -- scripts/common.sh@336 -- # read -ra ver1 00:03:16.541 11:26:42 -- scripts/common.sh@337 -- # IFS=.-: 00:03:16.541 11:26:42 -- scripts/common.sh@337 -- # read -ra ver2 00:03:16.541 11:26:42 -- scripts/common.sh@338 -- # local 'op=<' 00:03:16.541 11:26:42 -- scripts/common.sh@340 -- # ver1_l=2 00:03:16.541 11:26:42 -- scripts/common.sh@341 -- # ver2_l=1 00:03:16.541 11:26:42 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:16.541 11:26:42 -- scripts/common.sh@344 -- # case "$op" in 00:03:16.541 11:26:42 -- scripts/common.sh@345 -- # : 1 00:03:16.541 11:26:42 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:16.541 11:26:42 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:16.541 11:26:42 -- scripts/common.sh@365 -- # decimal 1 00:03:16.541 11:26:42 -- scripts/common.sh@353 -- # local d=1 00:03:16.541 11:26:42 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:16.541 11:26:42 -- scripts/common.sh@355 -- # echo 1 00:03:16.804 11:26:42 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:16.804 11:26:42 -- scripts/common.sh@366 -- # decimal 2 00:03:16.804 11:26:42 -- scripts/common.sh@353 -- # local d=2 00:03:16.804 11:26:42 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:16.804 11:26:42 -- scripts/common.sh@355 -- # echo 2 00:03:16.804 11:26:42 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:16.804 11:26:42 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:16.804 11:26:42 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:16.804 11:26:42 -- scripts/common.sh@368 -- # return 0 00:03:16.804 11:26:42 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:16.804 11:26:42 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:16.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.805 --rc genhtml_branch_coverage=1 00:03:16.805 --rc genhtml_function_coverage=1 00:03:16.805 --rc genhtml_legend=1 00:03:16.805 --rc geninfo_all_blocks=1 00:03:16.805 --rc geninfo_unexecuted_blocks=1 00:03:16.805 00:03:16.805 ' 00:03:16.805 11:26:42 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:16.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.805 --rc genhtml_branch_coverage=1 00:03:16.805 --rc genhtml_function_coverage=1 00:03:16.805 --rc genhtml_legend=1 00:03:16.805 --rc geninfo_all_blocks=1 00:03:16.805 --rc geninfo_unexecuted_blocks=1 00:03:16.805 00:03:16.805 ' 00:03:16.805 11:26:42 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:16.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.805 --rc genhtml_branch_coverage=1 00:03:16.805 --rc genhtml_function_coverage=1 00:03:16.805 --rc genhtml_legend=1 00:03:16.805 --rc geninfo_all_blocks=1 00:03:16.805 --rc geninfo_unexecuted_blocks=1 00:03:16.805 00:03:16.805 ' 00:03:16.805 11:26:42 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:16.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:16.805 --rc genhtml_branch_coverage=1 00:03:16.805 --rc genhtml_function_coverage=1 00:03:16.805 --rc genhtml_legend=1 00:03:16.805 --rc geninfo_all_blocks=1 00:03:16.805 --rc geninfo_unexecuted_blocks=1 00:03:16.805 00:03:16.805 ' 00:03:16.805 11:26:42 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:16.805 11:26:42 -- nvmf/common.sh@7 -- # uname -s 00:03:16.805 11:26:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:16.805 11:26:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:16.805 11:26:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:16.805 11:26:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:16.805 11:26:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:16.805 11:26:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:16.805 11:26:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:16.805 11:26:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:16.805 11:26:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:16.805 11:26:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:16.805 11:26:42 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:16.805 11:26:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:16.805 11:26:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:16.805 11:26:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:16.805 11:26:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:16.805 11:26:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:16.805 11:26:42 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:16.805 11:26:42 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:16.805 11:26:42 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:16.805 11:26:42 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:16.805 11:26:42 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:16.805 11:26:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.805 11:26:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.805 11:26:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.805 11:26:42 -- paths/export.sh@5 -- # export PATH 00:03:16.805 11:26:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:16.805 11:26:42 -- nvmf/common.sh@51 -- # : 0 00:03:16.805 11:26:42 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:16.805 11:26:42 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:16.805 11:26:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:16.805 11:26:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:16.805 11:26:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:16.805 11:26:42 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:16.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:16.805 11:26:42 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:16.805 11:26:42 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:16.805 11:26:42 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:16.805 11:26:42 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:16.805 11:26:42 -- spdk/autotest.sh@32 -- # uname -s 00:03:16.805 11:26:42 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:16.805 11:26:42 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:16.805 11:26:42 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
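A note on the "[: : integer expression expected" message traced above: nvmf/common.sh line 33 runs '[' '' -eq 1 ']', and the test builtin requires an integer on both sides of -eq, so an unset variable that expands to the empty string makes it error out; [ then returns a non-zero status, the check is treated as false, and the script continues, which is why the run proceeds normally after the message. A minimal reproduction with one common guard (the variable name here is hypothetical, not the one common.sh uses):

    flag=''                      # unset/empty for this configuration
    [ "$flag" -eq 1 ]            # -> [: : integer expression expected
    [ "${flag:-0}" -eq 1 ]       # defaulting the expansion always hands [ an integer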
00:03:16.805 11:26:42 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:16.805 11:26:42 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:16.805 11:26:42 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:16.805 11:26:42 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:16.805 11:26:42 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:16.805 11:26:42 -- spdk/autotest.sh@48 -- # udevadm_pid=809433 00:03:16.805 11:26:42 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:16.805 11:26:42 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:16.805 11:26:42 -- pm/common@17 -- # local monitor 00:03:16.805 11:26:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.805 11:26:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.805 11:26:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.805 11:26:42 -- pm/common@21 -- # date +%s 00:03:16.805 11:26:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:16.805 11:26:42 -- pm/common@21 -- # date +%s 00:03:16.805 11:26:42 -- pm/common@25 -- # sleep 1 00:03:16.805 11:26:42 -- pm/common@21 -- # date +%s 00:03:16.805 11:26:42 -- pm/common@21 -- # date +%s 00:03:16.805 11:26:42 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731666402 00:03:16.805 11:26:42 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731666402 00:03:16.805 11:26:42 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731666402 00:03:16.805 11:26:42 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1731666402 00:03:16.805 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731666402_collect-cpu-load.pm.log 00:03:16.805 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731666402_collect-vmstat.pm.log 00:03:16.805 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731666402_collect-cpu-temp.pm.log 00:03:16.805 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1731666402_collect-bmc-pm.bmc.pm.log 00:03:17.747 11:26:43 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:17.747 11:26:43 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:17.747 11:26:43 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:17.747 11:26:43 -- common/autotest_common.sh@10 -- # set +x 00:03:17.747 11:26:43 -- spdk/autotest.sh@59 -- # create_test_list 00:03:17.747 11:26:43 -- common/autotest_common.sh@750 -- # xtrace_disable 00:03:17.747 11:26:43 -- common/autotest_common.sh@10 -- # set +x 00:03:17.747 11:26:43 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:17.747 11:26:43 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:17.747 11:26:43 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:17.747 11:26:43 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:17.747 11:26:43 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:17.747 11:26:43 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:17.747 11:26:43 -- common/autotest_common.sh@1455 -- # uname 00:03:17.747 11:26:43 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:17.747 11:26:43 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:17.747 11:26:43 -- common/autotest_common.sh@1475 -- # uname 00:03:17.747 11:26:43 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:17.747 11:26:43 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:17.747 11:26:43 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:18.008 lcov: LCOV version 1.15 00:03:18.008 11:26:43 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:32.920 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:32.920 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:51.045 11:27:13 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:51.045 11:27:13 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:51.045 11:27:13 -- common/autotest_common.sh@10 -- # set +x 00:03:51.045 11:27:13 -- spdk/autotest.sh@78 -- # rm -f 00:03:51.045 11:27:13 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:51.617 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:51.617 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:51.617 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:51.617 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:51.617 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:51.877 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:51.877 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:51.877 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:51.877 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:51.877 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:51.877 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:51.877 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:51.877 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:51.877 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:51.877 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:52.138 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:52.138 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:52.398 11:27:17 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:03:52.398 11:27:17 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:52.398 11:27:17 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:52.398 11:27:17 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:52.399 11:27:17 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:52.399 11:27:17 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:52.399 11:27:17 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:52.399 11:27:17 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:52.399 11:27:17 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:52.399 11:27:17 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:52.399 11:27:17 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:52.399 11:27:17 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:52.399 11:27:17 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:52.399 11:27:17 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:52.399 11:27:17 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:52.399 No valid GPT data, bailing 00:03:52.399 11:27:17 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:52.399 11:27:17 -- scripts/common.sh@394 -- # pt= 00:03:52.399 11:27:17 -- scripts/common.sh@395 -- # return 1 00:03:52.399 11:27:17 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:52.399 1+0 records in 00:03:52.399 1+0 records out 00:03:52.399 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00194141 s, 540 MB/s 00:03:52.399 11:27:17 -- spdk/autotest.sh@105 -- # sync 00:03:52.399 11:27:17 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:52.399 11:27:17 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:52.399 11:27:17 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:02.398 11:27:26 -- spdk/autotest.sh@111 -- # uname -s 00:04:02.398 11:27:26 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:02.398 11:27:26 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:02.398 11:27:26 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:04.312 Hugepages 00:04:04.312 node hugesize free / total 00:04:04.312 node0 1048576kB 0 / 0 00:04:04.312 node0 2048kB 0 / 0 00:04:04.312 node1 1048576kB 0 / 0 00:04:04.312 node1 2048kB 0 / 0 00:04:04.312 00:04:04.312 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:04.312 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:04:04.573 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:04:04.573 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:04:04.573 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:04:04.573 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:04:04.573 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:04:04.573 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:04:04.573 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:04:04.573 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:04:04.573 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:04:04.573 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:04:04.573 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:04:04.573 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:04:04.573 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:04:04.573 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:04:04.573 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:04:04.573 I/OAT 0000:80:01.7 8086 
0b00 1 ioatdma - - 00:04:04.573 11:27:30 -- spdk/autotest.sh@117 -- # uname -s 00:04:04.573 11:27:30 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:04.573 11:27:30 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:04.573 11:27:30 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:08.779 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:08.779 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:08.779 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:08.779 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:08.779 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:08.779 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:08.779 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:08.779 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:08.779 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:08.779 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:08.779 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:08.779 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:08.779 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:08.779 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:08.779 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:08.779 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:10.165 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:10.426 11:27:35 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:11.370 11:27:36 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:11.370 11:27:36 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:11.370 11:27:36 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:11.370 11:27:36 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:11.370 11:27:36 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:11.370 11:27:36 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:11.370 11:27:36 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:11.370 11:27:36 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:11.370 11:27:36 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:11.370 11:27:36 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:11.370 11:27:36 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:04:11.370 11:27:36 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:14.909 Waiting for block devices as requested 00:04:14.909 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:14.909 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:15.170 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:15.170 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:15.170 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:15.170 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:15.431 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:15.431 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:04:15.431 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:04:15.692 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:04:15.953 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:04:15.953 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:04:15.953 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:04:16.214 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:04:16.214 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:04:16.214 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:04:16.476 0000:00:01.1 (8086 0b00): 
vfio-pci -> ioatdma 00:04:16.737 11:27:42 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:16.737 11:27:42 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:04:16.737 11:27:42 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:04:16.737 11:27:42 -- common/autotest_common.sh@1485 -- # grep 0000:65:00.0/nvme/nvme 00:04:16.737 11:27:42 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:16.737 11:27:42 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:04:16.737 11:27:42 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:04:16.737 11:27:42 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:16.737 11:27:42 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:16.737 11:27:42 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:16.737 11:27:42 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:16.737 11:27:42 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:16.737 11:27:42 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:16.737 11:27:42 -- common/autotest_common.sh@1529 -- # oacs=' 0x5f' 00:04:16.737 11:27:42 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:16.737 11:27:42 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:16.737 11:27:42 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:16.737 11:27:42 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:16.737 11:27:42 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:16.737 11:27:42 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:16.737 11:27:42 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:16.737 11:27:42 -- common/autotest_common.sh@1541 -- # continue 00:04:16.737 11:27:42 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:16.737 11:27:42 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:16.737 11:27:42 -- common/autotest_common.sh@10 -- # set +x 00:04:16.737 11:27:42 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:16.737 11:27:42 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:16.737 11:27:42 -- common/autotest_common.sh@10 -- # set +x 00:04:16.737 11:27:42 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:20.039 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:20.300 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:20.300 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:20.300 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:20.300 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:20.300 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:20.300 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:20.300 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:20.300 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:20.300 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:20.300 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:20.300 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:20.300 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:20.300 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:20.300 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:20.300 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:20.300 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:20.873 11:27:46 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:04:20.873 11:27:46 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:20.873 11:27:46 -- common/autotest_common.sh@10 -- # set +x 00:04:20.873 11:27:46 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:20.873 11:27:46 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:20.873 11:27:46 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:20.873 11:27:46 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:20.873 11:27:46 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:20.873 11:27:46 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:20.873 11:27:46 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:20.873 11:27:46 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:20.873 11:27:46 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:20.873 11:27:46 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:20.873 11:27:46 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:20.873 11:27:46 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:20.873 11:27:46 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:20.873 11:27:46 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:20.873 11:27:46 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:04:20.873 11:27:46 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:20.873 11:27:46 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:20.873 11:27:46 -- common/autotest_common.sh@1564 -- # device=0xa80a 00:04:20.873 11:27:46 -- common/autotest_common.sh@1565 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:20.873 11:27:46 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:20.873 11:27:46 -- common/autotest_common.sh@1570 -- # return 0 00:04:20.873 11:27:46 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:20.874 11:27:46 -- common/autotest_common.sh@1578 -- # return 0 00:04:20.874 11:27:46 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:20.874 11:27:46 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:20.874 11:27:46 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:20.874 11:27:46 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:20.874 11:27:46 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:20.874 11:27:46 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:20.874 11:27:46 -- common/autotest_common.sh@10 -- # set +x 00:04:20.874 11:27:46 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:20.874 11:27:46 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:20.874 11:27:46 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:20.874 11:27:46 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:20.874 11:27:46 -- common/autotest_common.sh@10 -- # set +x 00:04:20.874 ************************************ 00:04:20.874 START TEST env 00:04:20.874 ************************************ 00:04:20.874 11:27:46 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:21.135 * Looking for test storage... 
00:04:21.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:21.135 11:27:46 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:21.135 11:27:46 env -- common/autotest_common.sh@1691 -- # lcov --version 00:04:21.135 11:27:46 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:21.135 11:27:46 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:21.136 11:27:46 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:21.136 11:27:46 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:21.136 11:27:46 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:21.136 11:27:46 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:21.136 11:27:46 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:21.136 11:27:46 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:21.136 11:27:46 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:21.136 11:27:46 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:21.136 11:27:46 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:21.136 11:27:46 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:21.136 11:27:46 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:21.136 11:27:46 env -- scripts/common.sh@344 -- # case "$op" in 00:04:21.136 11:27:46 env -- scripts/common.sh@345 -- # : 1 00:04:21.136 11:27:46 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:21.136 11:27:46 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:21.136 11:27:46 env -- scripts/common.sh@365 -- # decimal 1 00:04:21.136 11:27:46 env -- scripts/common.sh@353 -- # local d=1 00:04:21.136 11:27:46 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:21.136 11:27:46 env -- scripts/common.sh@355 -- # echo 1 00:04:21.136 11:27:46 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:21.136 11:27:46 env -- scripts/common.sh@366 -- # decimal 2 00:04:21.136 11:27:46 env -- scripts/common.sh@353 -- # local d=2 00:04:21.136 11:27:46 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:21.136 11:27:46 env -- scripts/common.sh@355 -- # echo 2 00:04:21.136 11:27:46 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:21.136 11:27:46 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:21.136 11:27:46 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:21.136 11:27:46 env -- scripts/common.sh@368 -- # return 0 00:04:21.136 11:27:46 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:21.136 11:27:46 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:21.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.136 --rc genhtml_branch_coverage=1 00:04:21.136 --rc genhtml_function_coverage=1 00:04:21.136 --rc genhtml_legend=1 00:04:21.136 --rc geninfo_all_blocks=1 00:04:21.136 --rc geninfo_unexecuted_blocks=1 00:04:21.136 00:04:21.136 ' 00:04:21.136 11:27:46 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:21.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.136 --rc genhtml_branch_coverage=1 00:04:21.136 --rc genhtml_function_coverage=1 00:04:21.136 --rc genhtml_legend=1 00:04:21.136 --rc geninfo_all_blocks=1 00:04:21.136 --rc geninfo_unexecuted_blocks=1 00:04:21.136 00:04:21.136 ' 00:04:21.136 11:27:46 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:21.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.136 --rc genhtml_branch_coverage=1 00:04:21.136 --rc genhtml_function_coverage=1 
00:04:21.136 --rc genhtml_legend=1 00:04:21.136 --rc geninfo_all_blocks=1 00:04:21.136 --rc geninfo_unexecuted_blocks=1 00:04:21.136 00:04:21.136 ' 00:04:21.136 11:27:46 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:21.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:21.136 --rc genhtml_branch_coverage=1 00:04:21.136 --rc genhtml_function_coverage=1 00:04:21.136 --rc genhtml_legend=1 00:04:21.136 --rc geninfo_all_blocks=1 00:04:21.136 --rc geninfo_unexecuted_blocks=1 00:04:21.136 00:04:21.136 ' 00:04:21.136 11:27:46 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:21.136 11:27:46 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:21.136 11:27:46 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:21.136 11:27:46 env -- common/autotest_common.sh@10 -- # set +x 00:04:21.136 ************************************ 00:04:21.136 START TEST env_memory 00:04:21.136 ************************************ 00:04:21.136 11:27:46 env.env_memory -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:21.136 00:04:21.136 00:04:21.136 CUnit - A unit testing framework for C - Version 2.1-3 00:04:21.136 http://cunit.sourceforge.net/ 00:04:21.136 00:04:21.136 00:04:21.136 Suite: memory 00:04:21.398 Test: alloc and free memory map ...[2024-11-15 11:27:46.639772] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:21.398 passed 00:04:21.398 Test: mem map translation ...[2024-11-15 11:27:46.665386] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:21.398 [2024-11-15 11:27:46.665416] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:21.398 [2024-11-15 11:27:46.665462] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:21.398 [2024-11-15 11:27:46.665470] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:21.398 passed 00:04:21.398 Test: mem map registration ...[2024-11-15 11:27:46.720716] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:21.398 [2024-11-15 11:27:46.720760] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:21.398 passed 00:04:21.398 Test: mem map adjacent registrations ...passed 00:04:21.398 00:04:21.398 Run Summary: Type Total Ran Passed Failed Inactive 00:04:21.398 suites 1 1 n/a 0 0 00:04:21.398 tests 4 4 4 0 0 00:04:21.398 asserts 152 152 152 0 n/a 00:04:21.398 00:04:21.398 Elapsed time = 0.192 seconds 00:04:21.398 00:04:21.398 real 0m0.206s 00:04:21.398 user 0m0.193s 00:04:21.398 sys 0m0.012s 00:04:21.398 11:27:46 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:21.398 11:27:46 env.env_memory -- common/autotest_common.sh@10 -- # set +x 
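The "END TEST env_memory" banner just below, like the matching START banner above, is printed by the run_test helper from autotest_common.sh, which brackets each sub-test; the *ERROR* lines inside the test body are expected output from negative assertions (deliberately invalid spdk_mem_map calls that must fail), not test failures. A simplified sketch of the banner pattern, assuming names from the log rather than the exact SPDK implementation:

    run_test() {                  # hedged sketch; the real helper also times the test
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        "$@"                      # the test binary or script, e.g. memory_ut
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }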
00:04:21.398 ************************************ 00:04:21.398 END TEST env_memory 00:04:21.398 ************************************ 00:04:21.398 11:27:46 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:21.398 11:27:46 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:21.398 11:27:46 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:21.398 11:27:46 env -- common/autotest_common.sh@10 -- # set +x 00:04:21.398 ************************************ 00:04:21.398 START TEST env_vtophys 00:04:21.398 ************************************ 00:04:21.398 11:27:46 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:21.660 EAL: lib.eal log level changed from notice to debug 00:04:21.660 EAL: Detected lcore 0 as core 0 on socket 0 00:04:21.660 EAL: Detected lcore 1 as core 1 on socket 0 00:04:21.660 EAL: Detected lcore 2 as core 2 on socket 0 00:04:21.660 EAL: Detected lcore 3 as core 3 on socket 0 00:04:21.660 EAL: Detected lcore 4 as core 4 on socket 0 00:04:21.660 EAL: Detected lcore 5 as core 5 on socket 0 00:04:21.660 EAL: Detected lcore 6 as core 6 on socket 0 00:04:21.660 EAL: Detected lcore 7 as core 7 on socket 0 00:04:21.660 EAL: Detected lcore 8 as core 8 on socket 0 00:04:21.660 EAL: Detected lcore 9 as core 9 on socket 0 00:04:21.660 EAL: Detected lcore 10 as core 10 on socket 0 00:04:21.660 EAL: Detected lcore 11 as core 11 on socket 0 00:04:21.660 EAL: Detected lcore 12 as core 12 on socket 0 00:04:21.660 EAL: Detected lcore 13 as core 13 on socket 0 00:04:21.660 EAL: Detected lcore 14 as core 14 on socket 0 00:04:21.660 EAL: Detected lcore 15 as core 15 on socket 0 00:04:21.660 EAL: Detected lcore 16 as core 16 on socket 0 00:04:21.660 EAL: Detected lcore 17 as core 17 on socket 0 00:04:21.660 EAL: Detected lcore 18 as core 18 on socket 0 00:04:21.660 EAL: Detected lcore 19 as core 19 on socket 0 00:04:21.660 EAL: Detected lcore 20 as core 20 on socket 0 00:04:21.660 EAL: Detected lcore 21 as core 21 on socket 0 00:04:21.660 EAL: Detected lcore 22 as core 22 on socket 0 00:04:21.660 EAL: Detected lcore 23 as core 23 on socket 0 00:04:21.660 EAL: Detected lcore 24 as core 24 on socket 0 00:04:21.660 EAL: Detected lcore 25 as core 25 on socket 0 00:04:21.660 EAL: Detected lcore 26 as core 26 on socket 0 00:04:21.660 EAL: Detected lcore 27 as core 27 on socket 0 00:04:21.660 EAL: Detected lcore 28 as core 28 on socket 0 00:04:21.660 EAL: Detected lcore 29 as core 29 on socket 0 00:04:21.660 EAL: Detected lcore 30 as core 30 on socket 0 00:04:21.660 EAL: Detected lcore 31 as core 31 on socket 0 00:04:21.660 EAL: Detected lcore 32 as core 32 on socket 0 00:04:21.660 EAL: Detected lcore 33 as core 33 on socket 0 00:04:21.660 EAL: Detected lcore 34 as core 34 on socket 0 00:04:21.660 EAL: Detected lcore 35 as core 35 on socket 0 00:04:21.660 EAL: Detected lcore 36 as core 0 on socket 1 00:04:21.660 EAL: Detected lcore 37 as core 1 on socket 1 00:04:21.660 EAL: Detected lcore 38 as core 2 on socket 1 00:04:21.660 EAL: Detected lcore 39 as core 3 on socket 1 00:04:21.660 EAL: Detected lcore 40 as core 4 on socket 1 00:04:21.660 EAL: Detected lcore 41 as core 5 on socket 1 00:04:21.660 EAL: Detected lcore 42 as core 6 on socket 1 00:04:21.660 EAL: Detected lcore 43 as core 7 on socket 1 00:04:21.660 EAL: Detected lcore 44 as core 8 on socket 1 00:04:21.660 EAL: Detected lcore 45 as core 9 on socket 1 
00:04:21.660 EAL: Detected lcore 46 as core 10 on socket 1 00:04:21.660 EAL: Detected lcore 47 as core 11 on socket 1 00:04:21.660 EAL: Detected lcore 48 as core 12 on socket 1 00:04:21.660 EAL: Detected lcore 49 as core 13 on socket 1 00:04:21.660 EAL: Detected lcore 50 as core 14 on socket 1 00:04:21.660 EAL: Detected lcore 51 as core 15 on socket 1 00:04:21.660 EAL: Detected lcore 52 as core 16 on socket 1 00:04:21.661 EAL: Detected lcore 53 as core 17 on socket 1 00:04:21.661 EAL: Detected lcore 54 as core 18 on socket 1 00:04:21.661 EAL: Detected lcore 55 as core 19 on socket 1 00:04:21.661 EAL: Detected lcore 56 as core 20 on socket 1 00:04:21.661 EAL: Detected lcore 57 as core 21 on socket 1 00:04:21.661 EAL: Detected lcore 58 as core 22 on socket 1 00:04:21.661 EAL: Detected lcore 59 as core 23 on socket 1 00:04:21.661 EAL: Detected lcore 60 as core 24 on socket 1 00:04:21.661 EAL: Detected lcore 61 as core 25 on socket 1 00:04:21.661 EAL: Detected lcore 62 as core 26 on socket 1 00:04:21.661 EAL: Detected lcore 63 as core 27 on socket 1 00:04:21.661 EAL: Detected lcore 64 as core 28 on socket 1 00:04:21.661 EAL: Detected lcore 65 as core 29 on socket 1 00:04:21.661 EAL: Detected lcore 66 as core 30 on socket 1 00:04:21.661 EAL: Detected lcore 67 as core 31 on socket 1 00:04:21.661 EAL: Detected lcore 68 as core 32 on socket 1 00:04:21.661 EAL: Detected lcore 69 as core 33 on socket 1 00:04:21.661 EAL: Detected lcore 70 as core 34 on socket 1 00:04:21.661 EAL: Detected lcore 71 as core 35 on socket 1 00:04:21.661 EAL: Detected lcore 72 as core 0 on socket 0 00:04:21.661 EAL: Detected lcore 73 as core 1 on socket 0 00:04:21.661 EAL: Detected lcore 74 as core 2 on socket 0 00:04:21.661 EAL: Detected lcore 75 as core 3 on socket 0 00:04:21.661 EAL: Detected lcore 76 as core 4 on socket 0 00:04:21.661 EAL: Detected lcore 77 as core 5 on socket 0 00:04:21.661 EAL: Detected lcore 78 as core 6 on socket 0 00:04:21.661 EAL: Detected lcore 79 as core 7 on socket 0 00:04:21.661 EAL: Detected lcore 80 as core 8 on socket 0 00:04:21.661 EAL: Detected lcore 81 as core 9 on socket 0 00:04:21.661 EAL: Detected lcore 82 as core 10 on socket 0 00:04:21.661 EAL: Detected lcore 83 as core 11 on socket 0 00:04:21.661 EAL: Detected lcore 84 as core 12 on socket 0 00:04:21.661 EAL: Detected lcore 85 as core 13 on socket 0 00:04:21.661 EAL: Detected lcore 86 as core 14 on socket 0 00:04:21.661 EAL: Detected lcore 87 as core 15 on socket 0 00:04:21.661 EAL: Detected lcore 88 as core 16 on socket 0 00:04:21.661 EAL: Detected lcore 89 as core 17 on socket 0 00:04:21.661 EAL: Detected lcore 90 as core 18 on socket 0 00:04:21.661 EAL: Detected lcore 91 as core 19 on socket 0 00:04:21.661 EAL: Detected lcore 92 as core 20 on socket 0 00:04:21.661 EAL: Detected lcore 93 as core 21 on socket 0 00:04:21.661 EAL: Detected lcore 94 as core 22 on socket 0 00:04:21.661 EAL: Detected lcore 95 as core 23 on socket 0 00:04:21.661 EAL: Detected lcore 96 as core 24 on socket 0 00:04:21.661 EAL: Detected lcore 97 as core 25 on socket 0 00:04:21.661 EAL: Detected lcore 98 as core 26 on socket 0 00:04:21.661 EAL: Detected lcore 99 as core 27 on socket 0 00:04:21.661 EAL: Detected lcore 100 as core 28 on socket 0 00:04:21.661 EAL: Detected lcore 101 as core 29 on socket 0 00:04:21.661 EAL: Detected lcore 102 as core 30 on socket 0 00:04:21.661 EAL: Detected lcore 103 as core 31 on socket 0 00:04:21.661 EAL: Detected lcore 104 as core 32 on socket 0 00:04:21.661 EAL: Detected lcore 105 as core 33 on socket 0 00:04:21.661 EAL: 
Detected lcore 106 as core 34 on socket 0 00:04:21.661 EAL: Detected lcore 107 as core 35 on socket 0 00:04:21.661 EAL: Detected lcore 108 as core 0 on socket 1 00:04:21.661 EAL: Detected lcore 109 as core 1 on socket 1 00:04:21.661 EAL: Detected lcore 110 as core 2 on socket 1 00:04:21.661 EAL: Detected lcore 111 as core 3 on socket 1 00:04:21.661 EAL: Detected lcore 112 as core 4 on socket 1 00:04:21.661 EAL: Detected lcore 113 as core 5 on socket 1 00:04:21.661 EAL: Detected lcore 114 as core 6 on socket 1 00:04:21.661 EAL: Detected lcore 115 as core 7 on socket 1 00:04:21.661 EAL: Detected lcore 116 as core 8 on socket 1 00:04:21.661 EAL: Detected lcore 117 as core 9 on socket 1 00:04:21.661 EAL: Detected lcore 118 as core 10 on socket 1 00:04:21.661 EAL: Detected lcore 119 as core 11 on socket 1 00:04:21.661 EAL: Detected lcore 120 as core 12 on socket 1 00:04:21.661 EAL: Detected lcore 121 as core 13 on socket 1 00:04:21.661 EAL: Detected lcore 122 as core 14 on socket 1 00:04:21.661 EAL: Detected lcore 123 as core 15 on socket 1 00:04:21.661 EAL: Detected lcore 124 as core 16 on socket 1 00:04:21.661 EAL: Detected lcore 125 as core 17 on socket 1 00:04:21.661 EAL: Detected lcore 126 as core 18 on socket 1 00:04:21.661 EAL: Detected lcore 127 as core 19 on socket 1 00:04:21.661 EAL: Skipped lcore 128 as core 20 on socket 1 00:04:21.661 EAL: Skipped lcore 129 as core 21 on socket 1 00:04:21.661 EAL: Skipped lcore 130 as core 22 on socket 1 00:04:21.661 EAL: Skipped lcore 131 as core 23 on socket 1 00:04:21.661 EAL: Skipped lcore 132 as core 24 on socket 1 00:04:21.661 EAL: Skipped lcore 133 as core 25 on socket 1 00:04:21.661 EAL: Skipped lcore 134 as core 26 on socket 1 00:04:21.661 EAL: Skipped lcore 135 as core 27 on socket 1 00:04:21.661 EAL: Skipped lcore 136 as core 28 on socket 1 00:04:21.661 EAL: Skipped lcore 137 as core 29 on socket 1 00:04:21.661 EAL: Skipped lcore 138 as core 30 on socket 1 00:04:21.661 EAL: Skipped lcore 139 as core 31 on socket 1 00:04:21.661 EAL: Skipped lcore 140 as core 32 on socket 1 00:04:21.661 EAL: Skipped lcore 141 as core 33 on socket 1 00:04:21.661 EAL: Skipped lcore 142 as core 34 on socket 1 00:04:21.661 EAL: Skipped lcore 143 as core 35 on socket 1 00:04:21.661 EAL: Maximum logical cores by configuration: 128 00:04:21.661 EAL: Detected CPU lcores: 128 00:04:21.661 EAL: Detected NUMA nodes: 2 00:04:21.661 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:21.661 EAL: Detected shared linkage of DPDK 00:04:21.661 EAL: No shared files mode enabled, IPC will be disabled 00:04:21.661 EAL: Bus pci wants IOVA as 'DC' 00:04:21.661 EAL: Buses did not request a specific IOVA mode. 00:04:21.661 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:21.661 EAL: Selected IOVA mode 'VA' 00:04:21.661 EAL: Probing VFIO support... 00:04:21.661 EAL: IOMMU type 1 (Type 1) is supported 00:04:21.661 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:21.661 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:21.661 EAL: VFIO support initialized 00:04:21.661 EAL: Ask a virtual area of 0x2e000 bytes 00:04:21.661 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:21.661 EAL: Setting up physically contiguous memory... 
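The virtual-area reservations that follow are sized directly from the segment-list parameters: each list holds n_segs:8192 segments of hugepage_sz:2097152 (2 MiB), so the data area per list is 8192 * 2 MiB = 16 GiB = 0x400000000 bytes, exactly the size asked for below, while the small 0x61000 areas reserved alongside each list presumably hold its metadata. The arithmetic checks out in the shell:

    printf '0x%x\n' $((8192 * 2 * 1024 * 1024))    # -> 0x400000000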
00:04:21.661 EAL: Setting maximum number of open files to 524288 00:04:21.661 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:21.661 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:21.661 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:21.661 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.661 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:21.661 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:21.661 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.661 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:21.661 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:21.661 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.661 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:21.661 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:21.661 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.661 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:21.661 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:21.661 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.661 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:21.661 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:21.661 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.661 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:21.661 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:21.661 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.661 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:21.661 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:21.661 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.661 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:21.661 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:21.661 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:21.661 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.661 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:21.661 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:21.661 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.661 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:21.661 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:21.661 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.661 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:21.661 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:21.661 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.661 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:21.661 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:21.661 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.661 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:21.661 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:21.661 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.661 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:21.661 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:21.661 EAL: Ask a virtual area of 0x61000 bytes 00:04:21.661 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:21.661 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:21.661 EAL: Ask a virtual area of 0x400000000 bytes 00:04:21.661 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:21.661 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:21.661 EAL: Hugepages will be freed exactly as allocated. 00:04:21.661 EAL: No shared files mode enabled, IPC is disabled 00:04:21.661 EAL: No shared files mode enabled, IPC is disabled 00:04:21.661 EAL: TSC frequency is ~2400000 KHz 00:04:21.661 EAL: Main lcore 0 is ready (tid=7fbe57b4ca00;cpuset=[0]) 00:04:21.661 EAL: Trying to obtain current memory policy. 00:04:21.661 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.661 EAL: Restoring previous memory policy: 0 00:04:21.661 EAL: request: mp_malloc_sync 00:04:21.661 EAL: No shared files mode enabled, IPC is disabled 00:04:21.661 EAL: Heap on socket 0 was expanded by 2MB 00:04:21.661 EAL: No shared files mode enabled, IPC is disabled 00:04:21.661 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:21.661 EAL: Mem event callback 'spdk:(nil)' registered 00:04:21.661 00:04:21.661 00:04:21.661 CUnit - A unit testing framework for C - Version 2.1-3 00:04:21.661 http://cunit.sourceforge.net/ 00:04:21.661 00:04:21.661 00:04:21.661 Suite: components_suite 00:04:21.661 Test: vtophys_malloc_test ...passed 00:04:21.661 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:21.662 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.662 EAL: Restoring previous memory policy: 4 00:04:21.662 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.662 EAL: request: mp_malloc_sync 00:04:21.662 EAL: No shared files mode enabled, IPC is disabled 00:04:21.662 EAL: Heap on socket 0 was expanded by 4MB 00:04:21.662 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.662 EAL: request: mp_malloc_sync 00:04:21.662 EAL: No shared files mode enabled, IPC is disabled 00:04:21.662 EAL: Heap on socket 0 was shrunk by 4MB 00:04:21.662 EAL: Trying to obtain current memory policy. 00:04:21.662 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.662 EAL: Restoring previous memory policy: 4 00:04:21.662 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.662 EAL: request: mp_malloc_sync 00:04:21.662 EAL: No shared files mode enabled, IPC is disabled 00:04:21.662 EAL: Heap on socket 0 was expanded by 6MB 00:04:21.662 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.662 EAL: request: mp_malloc_sync 00:04:21.662 EAL: No shared files mode enabled, IPC is disabled 00:04:21.662 EAL: Heap on socket 0 was shrunk by 6MB 00:04:21.662 EAL: Trying to obtain current memory policy. 00:04:21.662 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.662 EAL: Restoring previous memory policy: 4 00:04:21.662 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.662 EAL: request: mp_malloc_sync 00:04:21.662 EAL: No shared files mode enabled, IPC is disabled 00:04:21.662 EAL: Heap on socket 0 was expanded by 10MB 00:04:21.662 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.662 EAL: request: mp_malloc_sync 00:04:21.662 EAL: No shared files mode enabled, IPC is disabled 00:04:21.662 EAL: Heap on socket 0 was shrunk by 10MB 00:04:21.662 EAL: Trying to obtain current memory policy. 
00:04:21.662 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.662 EAL: Restoring previous memory policy: 4 00:04:21.662 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.662 EAL: request: mp_malloc_sync 00:04:21.662 EAL: No shared files mode enabled, IPC is disabled 00:04:21.662 EAL: Heap on socket 0 was expanded by 18MB 00:04:21.662 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.662 EAL: request: mp_malloc_sync 00:04:21.662 EAL: No shared files mode enabled, IPC is disabled 00:04:21.662 EAL: Heap on socket 0 was shrunk by 18MB 00:04:21.662 EAL: Trying to obtain current memory policy. 00:04:21.662 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.662 EAL: Restoring previous memory policy: 4 00:04:21.662 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.662 EAL: request: mp_malloc_sync 00:04:21.662 EAL: No shared files mode enabled, IPC is disabled 00:04:21.662 EAL: Heap on socket 0 was expanded by 34MB 00:04:21.662 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.662 EAL: request: mp_malloc_sync 00:04:21.662 EAL: No shared files mode enabled, IPC is disabled 00:04:21.662 EAL: Heap on socket 0 was shrunk by 34MB 00:04:21.662 EAL: Trying to obtain current memory policy. 00:04:21.662 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.662 EAL: Restoring previous memory policy: 4 00:04:21.662 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.662 EAL: request: mp_malloc_sync 00:04:21.662 EAL: No shared files mode enabled, IPC is disabled 00:04:21.662 EAL: Heap on socket 0 was expanded by 66MB 00:04:21.662 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.662 EAL: request: mp_malloc_sync 00:04:21.662 EAL: No shared files mode enabled, IPC is disabled 00:04:21.662 EAL: Heap on socket 0 was shrunk by 66MB 00:04:21.662 EAL: Trying to obtain current memory policy. 00:04:21.662 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.662 EAL: Restoring previous memory policy: 4 00:04:21.662 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.662 EAL: request: mp_malloc_sync 00:04:21.662 EAL: No shared files mode enabled, IPC is disabled 00:04:21.662 EAL: Heap on socket 0 was expanded by 130MB 00:04:21.662 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.662 EAL: request: mp_malloc_sync 00:04:21.662 EAL: No shared files mode enabled, IPC is disabled 00:04:21.662 EAL: Heap on socket 0 was shrunk by 130MB 00:04:21.662 EAL: Trying to obtain current memory policy. 00:04:21.662 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.662 EAL: Restoring previous memory policy: 4 00:04:21.662 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.662 EAL: request: mp_malloc_sync 00:04:21.662 EAL: No shared files mode enabled, IPC is disabled 00:04:21.662 EAL: Heap on socket 0 was expanded by 258MB 00:04:21.662 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.923 EAL: request: mp_malloc_sync 00:04:21.923 EAL: No shared files mode enabled, IPC is disabled 00:04:21.923 EAL: Heap on socket 0 was shrunk by 258MB 00:04:21.923 EAL: Trying to obtain current memory policy. 
00:04:21.923 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.923 EAL: Restoring previous memory policy: 4 00:04:21.923 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.923 EAL: request: mp_malloc_sync 00:04:21.923 EAL: No shared files mode enabled, IPC is disabled 00:04:21.923 EAL: Heap on socket 0 was expanded by 514MB 00:04:21.923 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.923 EAL: request: mp_malloc_sync 00:04:21.923 EAL: No shared files mode enabled, IPC is disabled 00:04:21.923 EAL: Heap on socket 0 was shrunk by 514MB 00:04:21.923 EAL: Trying to obtain current memory policy. 00:04:21.923 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.185 EAL: Restoring previous memory policy: 4 00:04:22.185 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.185 EAL: request: mp_malloc_sync 00:04:22.185 EAL: No shared files mode enabled, IPC is disabled 00:04:22.185 EAL: Heap on socket 0 was expanded by 1026MB 00:04:22.185 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.447 EAL: request: mp_malloc_sync 00:04:22.447 EAL: No shared files mode enabled, IPC is disabled 00:04:22.447 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:22.447 passed 00:04:22.447 00:04:22.447 Run Summary: Type Total Ran Passed Failed Inactive 00:04:22.447 suites 1 1 n/a 0 0 00:04:22.447 tests 2 2 2 0 0 00:04:22.447 asserts 497 497 497 0 n/a 00:04:22.447 00:04:22.447 Elapsed time = 0.687 seconds 00:04:22.447 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.447 EAL: request: mp_malloc_sync 00:04:22.447 EAL: No shared files mode enabled, IPC is disabled 00:04:22.447 EAL: Heap on socket 0 was shrunk by 2MB 00:04:22.447 EAL: No shared files mode enabled, IPC is disabled 00:04:22.447 EAL: No shared files mode enabled, IPC is disabled 00:04:22.447 EAL: No shared files mode enabled, IPC is disabled 00:04:22.447 00:04:22.447 real 0m0.843s 00:04:22.447 user 0m0.440s 00:04:22.447 sys 0m0.371s 00:04:22.447 11:27:47 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:22.447 11:27:47 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:22.447 ************************************ 00:04:22.447 END TEST env_vtophys 00:04:22.447 ************************************ 00:04:22.447 11:27:47 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:22.447 11:27:47 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:22.447 11:27:47 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:22.447 11:27:47 env -- common/autotest_common.sh@10 -- # set +x 00:04:22.447 ************************************ 00:04:22.447 START TEST env_pci 00:04:22.447 ************************************ 00:04:22.447 11:27:47 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:22.447 00:04:22.447 00:04:22.447 CUnit - A unit testing framework for C - Version 2.1-3 00:04:22.447 http://cunit.sourceforge.net/ 00:04:22.447 00:04:22.447 00:04:22.447 Suite: pci 00:04:22.447 Test: pci_hook ...[2024-11-15 11:27:47.813069] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 829445 has claimed it 00:04:22.447 EAL: Cannot find device (10000:00:01.0) 00:04:22.447 EAL: Failed to attach device on primary process 00:04:22.447 passed 00:04:22.447 00:04:22.447 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:22.447 suites 1 1 n/a 0 0 00:04:22.447 tests 1 1 1 0 0 00:04:22.447 asserts 25 25 25 0 n/a 00:04:22.447 00:04:22.447 Elapsed time = 0.031 seconds 00:04:22.447 00:04:22.447 real 0m0.053s 00:04:22.447 user 0m0.015s 00:04:22.447 sys 0m0.038s 00:04:22.447 11:27:47 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:22.447 11:27:47 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:22.447 ************************************ 00:04:22.447 END TEST env_pci 00:04:22.447 ************************************ 00:04:22.447 11:27:47 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:22.447 11:27:47 env -- env/env.sh@15 -- # uname 00:04:22.447 11:27:47 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:22.447 11:27:47 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:22.447 11:27:47 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:22.447 11:27:47 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:04:22.447 11:27:47 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:22.447 11:27:47 env -- common/autotest_common.sh@10 -- # set +x 00:04:22.447 ************************************ 00:04:22.447 START TEST env_dpdk_post_init 00:04:22.447 ************************************ 00:04:22.447 11:27:47 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:22.709 EAL: Detected CPU lcores: 128 00:04:22.709 EAL: Detected NUMA nodes: 2 00:04:22.709 EAL: Detected shared linkage of DPDK 00:04:22.709 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:22.709 EAL: Selected IOVA mode 'VA' 00:04:22.709 EAL: VFIO support initialized 00:04:22.709 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:22.709 EAL: Using IOMMU type 1 (Type 1) 00:04:22.709 EAL: Ignore mapping IO port bar(1) 00:04:22.970 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:04:22.970 EAL: Ignore mapping IO port bar(1) 00:04:23.231 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:04:23.231 EAL: Ignore mapping IO port bar(1) 00:04:23.493 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:04:23.493 EAL: Ignore mapping IO port bar(1) 00:04:23.493 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:04:23.754 EAL: Ignore mapping IO port bar(1) 00:04:23.754 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:04:24.015 EAL: Ignore mapping IO port bar(1) 00:04:24.015 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:04:24.276 EAL: Ignore mapping IO port bar(1) 00:04:24.276 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:04:24.536 EAL: Ignore mapping IO port bar(1) 00:04:24.536 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:04:24.798 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:04:24.798 EAL: Ignore mapping IO port bar(1) 00:04:25.058 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:04:25.058 EAL: Ignore mapping IO port bar(1) 00:04:25.058 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:04:25.319 EAL: Ignore mapping IO port bar(1) 00:04:25.319 
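Every test binary in this run (env_vtophys, pci_ut, mem_callbacks, and the rpc suites below) prints the same "Suite: ... / Run Summary" banner because they are CUnit programs: the suites/tests/asserts counters and the elapsed time are emitted by CU_basic_run_tests() at exit. A skeleton of that pattern; the suite and test names echo the pci output above, and the assertion body is a placeholder:

```c
#include <CUnit/Basic.h>

/* Each CU_ASSERT feeds the "asserts" row of the Run Summary. */
static void
test_example(void)
{
	CU_ASSERT(1 + 1 == 2);
}

int
main(void)
{
	CU_pSuite suite;

	if (CU_initialize_registry() != CUE_SUCCESS)
		return CU_get_error();

	suite = CU_add_suite("pci", NULL, NULL);	/* no setup/teardown */
	if (suite == NULL || CU_add_test(suite, "pci_hook", test_example) == NULL) {
		CU_cleanup_registry();
		return CU_get_error();
	}

	CU_basic_set_mode(CU_BRM_VERBOSE);
	CU_basic_run_tests();	/* prints the Suite/Test/Run Summary banner */
	CU_cleanup_registry();
	return CU_get_error();
}
```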
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:04:25.580 EAL: Ignore mapping IO port bar(1) 00:04:25.580 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:04:25.841 EAL: Ignore mapping IO port bar(1) 00:04:25.841 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:04:26.102 EAL: Ignore mapping IO port bar(1) 00:04:26.102 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:04:26.102 EAL: Ignore mapping IO port bar(1) 00:04:26.364 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:04:26.364 EAL: Ignore mapping IO port bar(1) 00:04:26.624 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:04:26.624 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:04:26.624 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:04:26.624 Starting DPDK initialization... 00:04:26.624 Starting SPDK post initialization... 00:04:26.624 SPDK NVMe probe 00:04:26.624 Attaching to 0000:65:00.0 00:04:26.624 Attached to 0000:65:00.0 00:04:26.624 Cleaning up... 00:04:28.539 00:04:28.539 real 0m5.751s 00:04:28.539 user 0m0.116s 00:04:28.539 sys 0m0.187s 00:04:28.539 11:27:53 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:28.539 11:27:53 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:28.539 ************************************ 00:04:28.539 END TEST env_dpdk_post_init 00:04:28.539 ************************************ 00:04:28.539 11:27:53 env -- env/env.sh@26 -- # uname 00:04:28.539 11:27:53 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:28.539 11:27:53 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:28.539 11:27:53 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:28.539 11:27:53 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:28.539 11:27:53 env -- common/autotest_common.sh@10 -- # set +x 00:04:28.539 ************************************ 00:04:28.539 START TEST env_mem_callbacks 00:04:28.539 ************************************ 00:04:28.539 11:27:53 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:28.539 EAL: Detected CPU lcores: 128 00:04:28.539 EAL: Detected NUMA nodes: 2 00:04:28.539 EAL: Detected shared linkage of DPDK 00:04:28.539 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:28.539 EAL: Selected IOVA mode 'VA' 00:04:28.539 EAL: VFIO support initialized 00:04:28.539 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:28.539 00:04:28.539 00:04:28.539 CUnit - A unit testing framework for C - Version 2.1-3 00:04:28.539 http://cunit.sourceforge.net/ 00:04:28.539 00:04:28.539 00:04:28.539 Suite: memory 00:04:28.539 Test: test ... 
00:04:28.539 register 0x200000200000 2097152 00:04:28.539 malloc 3145728 00:04:28.539 register 0x200000400000 4194304 00:04:28.539 buf 0x200000500000 len 3145728 PASSED 00:04:28.539 malloc 64 00:04:28.539 buf 0x2000004fff40 len 64 PASSED 00:04:28.539 malloc 4194304 00:04:28.539 register 0x200000800000 6291456 00:04:28.539 buf 0x200000a00000 len 4194304 PASSED 00:04:28.539 free 0x200000500000 3145728 00:04:28.539 free 0x2000004fff40 64 00:04:28.539 unregister 0x200000400000 4194304 PASSED 00:04:28.539 free 0x200000a00000 4194304 00:04:28.539 unregister 0x200000800000 6291456 PASSED 00:04:28.539 malloc 8388608 00:04:28.539 register 0x200000400000 10485760 00:04:28.539 buf 0x200000600000 len 8388608 PASSED 00:04:28.539 free 0x200000600000 8388608 00:04:28.539 unregister 0x200000400000 10485760 PASSED 00:04:28.539 passed 00:04:28.539 00:04:28.539 Run Summary: Type Total Ran Passed Failed Inactive 00:04:28.539 suites 1 1 n/a 0 0 00:04:28.539 tests 1 1 1 0 0 00:04:28.539 asserts 15 15 15 0 n/a 00:04:28.539 00:04:28.539 Elapsed time = 0.010 seconds 00:04:28.539 00:04:28.539 real 0m0.068s 00:04:28.539 user 0m0.023s 00:04:28.539 sys 0m0.045s 00:04:28.539 11:27:53 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:28.539 11:27:53 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:28.539 ************************************ 00:04:28.539 END TEST env_mem_callbacks 00:04:28.539 ************************************ 00:04:28.539 00:04:28.539 real 0m7.537s 00:04:28.539 user 0m1.042s 00:04:28.539 sys 0m1.048s 00:04:28.539 11:27:53 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:28.539 11:27:53 env -- common/autotest_common.sh@10 -- # set +x 00:04:28.539 ************************************ 00:04:28.539 END TEST env 00:04:28.539 ************************************ 00:04:28.539 11:27:53 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:28.539 11:27:53 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:28.539 11:27:53 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:28.539 11:27:53 -- common/autotest_common.sh@10 -- # set +x 00:04:28.539 ************************************ 00:04:28.539 START TEST rpc 00:04:28.539 ************************************ 00:04:28.539 11:27:53 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:28.800 * Looking for test storage... 
00:04:28.800 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:28.800 11:27:54 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:28.800 11:27:54 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:28.800 11:27:54 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:28.801 11:27:54 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:28.801 11:27:54 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:28.801 11:27:54 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:28.801 11:27:54 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:28.801 11:27:54 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:28.801 11:27:54 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:28.801 11:27:54 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:28.801 11:27:54 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:28.801 11:27:54 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:28.801 11:27:54 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:28.801 11:27:54 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:28.801 11:27:54 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:28.801 11:27:54 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:28.801 11:27:54 rpc -- scripts/common.sh@345 -- # : 1 00:04:28.801 11:27:54 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:28.801 11:27:54 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:28.801 11:27:54 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:28.801 11:27:54 rpc -- scripts/common.sh@353 -- # local d=1 00:04:28.801 11:27:54 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:28.801 11:27:54 rpc -- scripts/common.sh@355 -- # echo 1 00:04:28.801 11:27:54 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:28.801 11:27:54 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:28.801 11:27:54 rpc -- scripts/common.sh@353 -- # local d=2 00:04:28.801 11:27:54 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:28.801 11:27:54 rpc -- scripts/common.sh@355 -- # echo 2 00:04:28.801 11:27:54 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:28.801 11:27:54 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:28.801 11:27:54 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:28.801 11:27:54 rpc -- scripts/common.sh@368 -- # return 0 00:04:28.801 11:27:54 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:28.801 11:27:54 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:28.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.801 --rc genhtml_branch_coverage=1 00:04:28.801 --rc genhtml_function_coverage=1 00:04:28.801 --rc genhtml_legend=1 00:04:28.801 --rc geninfo_all_blocks=1 00:04:28.801 --rc geninfo_unexecuted_blocks=1 00:04:28.801 00:04:28.801 ' 00:04:28.801 11:27:54 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:28.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.801 --rc genhtml_branch_coverage=1 00:04:28.801 --rc genhtml_function_coverage=1 00:04:28.801 --rc genhtml_legend=1 00:04:28.801 --rc geninfo_all_blocks=1 00:04:28.801 --rc geninfo_unexecuted_blocks=1 00:04:28.801 00:04:28.801 ' 00:04:28.801 11:27:54 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:28.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.801 --rc genhtml_branch_coverage=1 00:04:28.801 --rc genhtml_function_coverage=1 
00:04:28.801 --rc genhtml_legend=1 00:04:28.801 --rc geninfo_all_blocks=1 00:04:28.801 --rc geninfo_unexecuted_blocks=1 00:04:28.801 00:04:28.801 ' 00:04:28.801 11:27:54 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:28.801 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:28.801 --rc genhtml_branch_coverage=1 00:04:28.801 --rc genhtml_function_coverage=1 00:04:28.801 --rc genhtml_legend=1 00:04:28.801 --rc geninfo_all_blocks=1 00:04:28.801 --rc geninfo_unexecuted_blocks=1 00:04:28.801 00:04:28.801 ' 00:04:28.801 11:27:54 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:28.801 11:27:54 rpc -- rpc/rpc.sh@65 -- # spdk_pid=830750 00:04:28.801 11:27:54 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:28.801 11:27:54 rpc -- rpc/rpc.sh@67 -- # waitforlisten 830750 00:04:28.801 11:27:54 rpc -- common/autotest_common.sh@833 -- # '[' -z 830750 ']' 00:04:28.801 11:27:54 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:28.801 11:27:54 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:28.801 11:27:54 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:28.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:28.801 11:27:54 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:28.801 11:27:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.801 [2024-11-15 11:27:54.214361] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:04:28.801 [2024-11-15 11:27:54.214424] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid830750 ] 00:04:29.061 [2024-11-15 11:27:54.307148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.061 [2024-11-15 11:27:54.359420] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:29.061 [2024-11-15 11:27:54.359473] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 830750' to capture a snapshot of events at runtime. 00:04:29.061 [2024-11-15 11:27:54.359482] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:29.061 [2024-11-15 11:27:54.359489] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:29.061 [2024-11-15 11:27:54.359495] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid830750 for offline analysis/debug. 
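The three app_setup_trace notices above appear because this spdk_tgt was started with -e bdev: the bdev tracepoint group is fully enabled (group mask 0x8 with tpoint_mask 0xffffffffffffffff in the trace_get_info output further down) and events land in /dev/shm/spdk_tgt_trace.pid<pid>. An SPDK application opts into the same thing through struct spdk_app_opts; a hedged sketch, assuming the opts layout of recent SPDK releases:

```c
#include "spdk/event.h"

static void
app_started(void *arg)
{
	(void)arg;
	/* Target is up; JSON-RPCs on opts.rpc_addr are now served. */
}

int
main(void)
{
	struct spdk_app_opts opts;
	int rc;

	spdk_app_opts_init(&opts, sizeof(opts));
	opts.name = "spdk_tgt";
	opts.rpc_addr = "/var/tmp/spdk.sock";
	/* Equivalent of the -e bdev command-line flag: enable the bdev
	 * tracepoint group and create the shm trace file at startup. */
	opts.tpoint_group_mask = "bdev";

	rc = spdk_app_start(&opts, app_started, NULL);
	spdk_app_fini();
	return rc;
}
```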
00:04:29.061 [2024-11-15 11:27:54.360280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.633 11:27:55 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:29.633 11:27:55 rpc -- common/autotest_common.sh@866 -- # return 0 00:04:29.633 11:27:55 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:29.633 11:27:55 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:29.633 11:27:55 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:29.633 11:27:55 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:29.633 11:27:55 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:29.633 11:27:55 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:29.633 11:27:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.633 ************************************ 00:04:29.633 START TEST rpc_integrity 00:04:29.633 ************************************ 00:04:29.633 11:27:55 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:29.633 11:27:55 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:29.633 11:27:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.633 11:27:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.633 11:27:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.633 11:27:55 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:29.633 11:27:55 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:29.894 11:27:55 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:29.894 11:27:55 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:29.894 11:27:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.894 11:27:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.894 11:27:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.894 11:27:55 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:29.894 11:27:55 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:29.894 11:27:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.894 11:27:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.894 11:27:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.894 11:27:55 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:29.894 { 00:04:29.894 "name": "Malloc0", 00:04:29.894 "aliases": [ 00:04:29.894 "aca0afa8-409f-4958-a62c-e1a0099f79b2" 00:04:29.894 ], 00:04:29.894 "product_name": "Malloc disk", 00:04:29.894 "block_size": 512, 00:04:29.894 "num_blocks": 16384, 00:04:29.894 "uuid": "aca0afa8-409f-4958-a62c-e1a0099f79b2", 00:04:29.894 "assigned_rate_limits": { 00:04:29.894 "rw_ios_per_sec": 0, 00:04:29.894 "rw_mbytes_per_sec": 0, 00:04:29.894 "r_mbytes_per_sec": 0, 00:04:29.894 "w_mbytes_per_sec": 0 00:04:29.894 }, 
00:04:29.894 "claimed": false, 00:04:29.894 "zoned": false, 00:04:29.895 "supported_io_types": { 00:04:29.895 "read": true, 00:04:29.895 "write": true, 00:04:29.895 "unmap": true, 00:04:29.895 "flush": true, 00:04:29.895 "reset": true, 00:04:29.895 "nvme_admin": false, 00:04:29.895 "nvme_io": false, 00:04:29.895 "nvme_io_md": false, 00:04:29.895 "write_zeroes": true, 00:04:29.895 "zcopy": true, 00:04:29.895 "get_zone_info": false, 00:04:29.895 "zone_management": false, 00:04:29.895 "zone_append": false, 00:04:29.895 "compare": false, 00:04:29.895 "compare_and_write": false, 00:04:29.895 "abort": true, 00:04:29.895 "seek_hole": false, 00:04:29.895 "seek_data": false, 00:04:29.895 "copy": true, 00:04:29.895 "nvme_iov_md": false 00:04:29.895 }, 00:04:29.895 "memory_domains": [ 00:04:29.895 { 00:04:29.895 "dma_device_id": "system", 00:04:29.895 "dma_device_type": 1 00:04:29.895 }, 00:04:29.895 { 00:04:29.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:29.895 "dma_device_type": 2 00:04:29.895 } 00:04:29.895 ], 00:04:29.895 "driver_specific": {} 00:04:29.895 } 00:04:29.895 ]' 00:04:29.895 11:27:55 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:29.895 11:27:55 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:29.895 11:27:55 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:29.895 11:27:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.895 11:27:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.895 [2024-11-15 11:27:55.233333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:29.895 [2024-11-15 11:27:55.233380] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:29.895 [2024-11-15 11:27:55.233396] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x151f800 00:04:29.895 [2024-11-15 11:27:55.233404] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:29.895 [2024-11-15 11:27:55.234968] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:29.895 [2024-11-15 11:27:55.235004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:29.895 Passthru0 00:04:29.895 11:27:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.895 11:27:55 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:29.895 11:27:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.895 11:27:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.895 11:27:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.895 11:27:55 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:29.895 { 00:04:29.895 "name": "Malloc0", 00:04:29.895 "aliases": [ 00:04:29.895 "aca0afa8-409f-4958-a62c-e1a0099f79b2" 00:04:29.895 ], 00:04:29.895 "product_name": "Malloc disk", 00:04:29.895 "block_size": 512, 00:04:29.895 "num_blocks": 16384, 00:04:29.895 "uuid": "aca0afa8-409f-4958-a62c-e1a0099f79b2", 00:04:29.895 "assigned_rate_limits": { 00:04:29.895 "rw_ios_per_sec": 0, 00:04:29.895 "rw_mbytes_per_sec": 0, 00:04:29.895 "r_mbytes_per_sec": 0, 00:04:29.895 "w_mbytes_per_sec": 0 00:04:29.895 }, 00:04:29.895 "claimed": true, 00:04:29.895 "claim_type": "exclusive_write", 00:04:29.895 "zoned": false, 00:04:29.895 "supported_io_types": { 00:04:29.895 "read": true, 00:04:29.895 "write": true, 00:04:29.895 "unmap": true, 00:04:29.895 "flush": 
true, 00:04:29.895 "reset": true, 00:04:29.895 "nvme_admin": false, 00:04:29.895 "nvme_io": false, 00:04:29.895 "nvme_io_md": false, 00:04:29.895 "write_zeroes": true, 00:04:29.895 "zcopy": true, 00:04:29.895 "get_zone_info": false, 00:04:29.895 "zone_management": false, 00:04:29.895 "zone_append": false, 00:04:29.895 "compare": false, 00:04:29.895 "compare_and_write": false, 00:04:29.895 "abort": true, 00:04:29.895 "seek_hole": false, 00:04:29.895 "seek_data": false, 00:04:29.895 "copy": true, 00:04:29.895 "nvme_iov_md": false 00:04:29.895 }, 00:04:29.895 "memory_domains": [ 00:04:29.895 { 00:04:29.895 "dma_device_id": "system", 00:04:29.895 "dma_device_type": 1 00:04:29.895 }, 00:04:29.895 { 00:04:29.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:29.895 "dma_device_type": 2 00:04:29.895 } 00:04:29.895 ], 00:04:29.895 "driver_specific": {} 00:04:29.895 }, 00:04:29.895 { 00:04:29.895 "name": "Passthru0", 00:04:29.895 "aliases": [ 00:04:29.895 "aabb7db6-28fd-5479-8be5-b59e5cb1b8d3" 00:04:29.895 ], 00:04:29.895 "product_name": "passthru", 00:04:29.895 "block_size": 512, 00:04:29.895 "num_blocks": 16384, 00:04:29.895 "uuid": "aabb7db6-28fd-5479-8be5-b59e5cb1b8d3", 00:04:29.895 "assigned_rate_limits": { 00:04:29.895 "rw_ios_per_sec": 0, 00:04:29.895 "rw_mbytes_per_sec": 0, 00:04:29.895 "r_mbytes_per_sec": 0, 00:04:29.895 "w_mbytes_per_sec": 0 00:04:29.895 }, 00:04:29.895 "claimed": false, 00:04:29.895 "zoned": false, 00:04:29.895 "supported_io_types": { 00:04:29.895 "read": true, 00:04:29.895 "write": true, 00:04:29.895 "unmap": true, 00:04:29.895 "flush": true, 00:04:29.895 "reset": true, 00:04:29.895 "nvme_admin": false, 00:04:29.895 "nvme_io": false, 00:04:29.895 "nvme_io_md": false, 00:04:29.895 "write_zeroes": true, 00:04:29.895 "zcopy": true, 00:04:29.895 "get_zone_info": false, 00:04:29.895 "zone_management": false, 00:04:29.895 "zone_append": false, 00:04:29.895 "compare": false, 00:04:29.895 "compare_and_write": false, 00:04:29.895 "abort": true, 00:04:29.895 "seek_hole": false, 00:04:29.895 "seek_data": false, 00:04:29.895 "copy": true, 00:04:29.895 "nvme_iov_md": false 00:04:29.895 }, 00:04:29.895 "memory_domains": [ 00:04:29.895 { 00:04:29.895 "dma_device_id": "system", 00:04:29.895 "dma_device_type": 1 00:04:29.895 }, 00:04:29.895 { 00:04:29.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:29.895 "dma_device_type": 2 00:04:29.895 } 00:04:29.895 ], 00:04:29.895 "driver_specific": { 00:04:29.895 "passthru": { 00:04:29.895 "name": "Passthru0", 00:04:29.895 "base_bdev_name": "Malloc0" 00:04:29.895 } 00:04:29.895 } 00:04:29.895 } 00:04:29.895 ]' 00:04:29.895 11:27:55 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:29.895 11:27:55 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:29.895 11:27:55 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:29.895 11:27:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.895 11:27:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.895 11:27:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.895 11:27:55 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:29.895 11:27:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.895 11:27:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.895 11:27:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.895 11:27:55 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:04:29.895 11:27:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.895 11:27:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.895 11:27:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.895 11:27:55 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:29.895 11:27:55 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:30.156 11:27:55 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:30.156 00:04:30.156 real 0m0.310s 00:04:30.156 user 0m0.190s 00:04:30.156 sys 0m0.048s 00:04:30.156 11:27:55 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:30.156 11:27:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.156 ************************************ 00:04:30.156 END TEST rpc_integrity 00:04:30.156 ************************************ 00:04:30.156 11:27:55 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:30.156 11:27:55 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:30.156 11:27:55 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:30.156 11:27:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.156 ************************************ 00:04:30.156 START TEST rpc_plugins 00:04:30.156 ************************************ 00:04:30.156 11:27:55 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:04:30.156 11:27:55 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:30.156 11:27:55 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.156 11:27:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:30.156 11:27:55 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.156 11:27:55 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:30.156 11:27:55 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:30.156 11:27:55 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.156 11:27:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:30.156 11:27:55 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.156 11:27:55 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:30.156 { 00:04:30.156 "name": "Malloc1", 00:04:30.156 "aliases": [ 00:04:30.156 "ca42fbb6-3cbb-47a9-b503-b89f6c5dafbb" 00:04:30.156 ], 00:04:30.156 "product_name": "Malloc disk", 00:04:30.156 "block_size": 4096, 00:04:30.156 "num_blocks": 256, 00:04:30.156 "uuid": "ca42fbb6-3cbb-47a9-b503-b89f6c5dafbb", 00:04:30.156 "assigned_rate_limits": { 00:04:30.156 "rw_ios_per_sec": 0, 00:04:30.156 "rw_mbytes_per_sec": 0, 00:04:30.156 "r_mbytes_per_sec": 0, 00:04:30.156 "w_mbytes_per_sec": 0 00:04:30.156 }, 00:04:30.156 "claimed": false, 00:04:30.156 "zoned": false, 00:04:30.156 "supported_io_types": { 00:04:30.156 "read": true, 00:04:30.156 "write": true, 00:04:30.156 "unmap": true, 00:04:30.156 "flush": true, 00:04:30.156 "reset": true, 00:04:30.156 "nvme_admin": false, 00:04:30.156 "nvme_io": false, 00:04:30.156 "nvme_io_md": false, 00:04:30.156 "write_zeroes": true, 00:04:30.156 "zcopy": true, 00:04:30.156 "get_zone_info": false, 00:04:30.156 "zone_management": false, 00:04:30.156 "zone_append": false, 00:04:30.156 "compare": false, 00:04:30.156 "compare_and_write": false, 00:04:30.156 "abort": true, 00:04:30.156 "seek_hole": false, 00:04:30.156 "seek_data": false, 00:04:30.156 "copy": true, 00:04:30.156 "nvme_iov_md": false 
00:04:30.156 }, 00:04:30.156 "memory_domains": [ 00:04:30.156 { 00:04:30.156 "dma_device_id": "system", 00:04:30.156 "dma_device_type": 1 00:04:30.156 }, 00:04:30.156 { 00:04:30.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.156 "dma_device_type": 2 00:04:30.156 } 00:04:30.156 ], 00:04:30.156 "driver_specific": {} 00:04:30.156 } 00:04:30.156 ]' 00:04:30.156 11:27:55 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:30.156 11:27:55 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:30.156 11:27:55 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:30.156 11:27:55 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.156 11:27:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:30.156 11:27:55 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.156 11:27:55 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:30.156 11:27:55 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.156 11:27:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:30.156 11:27:55 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.156 11:27:55 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:30.156 11:27:55 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:30.156 11:27:55 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:30.156 00:04:30.156 real 0m0.157s 00:04:30.156 user 0m0.096s 00:04:30.156 sys 0m0.023s 00:04:30.156 11:27:55 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:30.156 11:27:55 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:30.156 ************************************ 00:04:30.156 END TEST rpc_plugins 00:04:30.156 ************************************ 00:04:30.417 11:27:55 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:30.417 11:27:55 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:30.417 11:27:55 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:30.417 11:27:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.417 ************************************ 00:04:30.417 START TEST rpc_trace_cmd_test 00:04:30.417 ************************************ 00:04:30.417 11:27:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:04:30.417 11:27:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:30.417 11:27:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:30.417 11:27:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.417 11:27:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:30.417 11:27:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.417 11:27:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:30.417 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid830750", 00:04:30.417 "tpoint_group_mask": "0x8", 00:04:30.417 "iscsi_conn": { 00:04:30.417 "mask": "0x2", 00:04:30.417 "tpoint_mask": "0x0" 00:04:30.417 }, 00:04:30.417 "scsi": { 00:04:30.417 "mask": "0x4", 00:04:30.417 "tpoint_mask": "0x0" 00:04:30.417 }, 00:04:30.417 "bdev": { 00:04:30.417 "mask": "0x8", 00:04:30.417 "tpoint_mask": "0xffffffffffffffff" 00:04:30.417 }, 00:04:30.417 "nvmf_rdma": { 00:04:30.417 "mask": "0x10", 00:04:30.417 "tpoint_mask": "0x0" 00:04:30.417 }, 00:04:30.417 "nvmf_tcp": { 00:04:30.417 "mask": "0x20", 00:04:30.417 
"tpoint_mask": "0x0" 00:04:30.417 }, 00:04:30.417 "ftl": { 00:04:30.417 "mask": "0x40", 00:04:30.417 "tpoint_mask": "0x0" 00:04:30.417 }, 00:04:30.417 "blobfs": { 00:04:30.417 "mask": "0x80", 00:04:30.417 "tpoint_mask": "0x0" 00:04:30.417 }, 00:04:30.417 "dsa": { 00:04:30.417 "mask": "0x200", 00:04:30.417 "tpoint_mask": "0x0" 00:04:30.417 }, 00:04:30.417 "thread": { 00:04:30.417 "mask": "0x400", 00:04:30.417 "tpoint_mask": "0x0" 00:04:30.417 }, 00:04:30.417 "nvme_pcie": { 00:04:30.417 "mask": "0x800", 00:04:30.417 "tpoint_mask": "0x0" 00:04:30.417 }, 00:04:30.417 "iaa": { 00:04:30.417 "mask": "0x1000", 00:04:30.417 "tpoint_mask": "0x0" 00:04:30.417 }, 00:04:30.417 "nvme_tcp": { 00:04:30.417 "mask": "0x2000", 00:04:30.417 "tpoint_mask": "0x0" 00:04:30.417 }, 00:04:30.417 "bdev_nvme": { 00:04:30.417 "mask": "0x4000", 00:04:30.417 "tpoint_mask": "0x0" 00:04:30.417 }, 00:04:30.417 "sock": { 00:04:30.417 "mask": "0x8000", 00:04:30.417 "tpoint_mask": "0x0" 00:04:30.417 }, 00:04:30.417 "blob": { 00:04:30.417 "mask": "0x10000", 00:04:30.417 "tpoint_mask": "0x0" 00:04:30.417 }, 00:04:30.417 "bdev_raid": { 00:04:30.417 "mask": "0x20000", 00:04:30.417 "tpoint_mask": "0x0" 00:04:30.417 }, 00:04:30.417 "scheduler": { 00:04:30.417 "mask": "0x40000", 00:04:30.417 "tpoint_mask": "0x0" 00:04:30.417 } 00:04:30.417 }' 00:04:30.417 11:27:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:30.417 11:27:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:30.417 11:27:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:30.417 11:27:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:30.417 11:27:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:30.417 11:27:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:30.417 11:27:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:30.678 11:27:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:30.678 11:27:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:30.678 11:27:55 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:30.678 00:04:30.678 real 0m0.252s 00:04:30.678 user 0m0.212s 00:04:30.678 sys 0m0.032s 00:04:30.678 11:27:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:30.678 11:27:55 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:30.678 ************************************ 00:04:30.678 END TEST rpc_trace_cmd_test 00:04:30.678 ************************************ 00:04:30.678 11:27:56 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:30.678 11:27:56 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:30.678 11:27:56 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:30.678 11:27:56 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:30.678 11:27:56 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:30.678 11:27:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.678 ************************************ 00:04:30.678 START TEST rpc_daemon_integrity 00:04:30.678 ************************************ 00:04:30.678 11:27:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:30.678 11:27:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:30.678 11:27:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.678 11:27:56 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.678 11:27:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.678 11:27:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:30.678 11:27:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:30.678 11:27:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:30.678 11:27:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:30.678 11:27:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.678 11:27:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.678 11:27:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.678 11:27:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:30.678 11:27:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:30.678 11:27:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.678 11:27:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.679 11:27:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.679 11:27:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:30.679 { 00:04:30.679 "name": "Malloc2", 00:04:30.679 "aliases": [ 00:04:30.679 "3d14c83b-3192-44ac-8ca7-63e80cb30670" 00:04:30.679 ], 00:04:30.679 "product_name": "Malloc disk", 00:04:30.679 "block_size": 512, 00:04:30.679 "num_blocks": 16384, 00:04:30.679 "uuid": "3d14c83b-3192-44ac-8ca7-63e80cb30670", 00:04:30.679 "assigned_rate_limits": { 00:04:30.679 "rw_ios_per_sec": 0, 00:04:30.679 "rw_mbytes_per_sec": 0, 00:04:30.679 "r_mbytes_per_sec": 0, 00:04:30.679 "w_mbytes_per_sec": 0 00:04:30.679 }, 00:04:30.679 "claimed": false, 00:04:30.679 "zoned": false, 00:04:30.679 "supported_io_types": { 00:04:30.679 "read": true, 00:04:30.679 "write": true, 00:04:30.679 "unmap": true, 00:04:30.679 "flush": true, 00:04:30.679 "reset": true, 00:04:30.679 "nvme_admin": false, 00:04:30.679 "nvme_io": false, 00:04:30.679 "nvme_io_md": false, 00:04:30.679 "write_zeroes": true, 00:04:30.679 "zcopy": true, 00:04:30.679 "get_zone_info": false, 00:04:30.679 "zone_management": false, 00:04:30.679 "zone_append": false, 00:04:30.679 "compare": false, 00:04:30.679 "compare_and_write": false, 00:04:30.679 "abort": true, 00:04:30.679 "seek_hole": false, 00:04:30.679 "seek_data": false, 00:04:30.679 "copy": true, 00:04:30.679 "nvme_iov_md": false 00:04:30.679 }, 00:04:30.679 "memory_domains": [ 00:04:30.679 { 00:04:30.679 "dma_device_id": "system", 00:04:30.679 "dma_device_type": 1 00:04:30.679 }, 00:04:30.679 { 00:04:30.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.679 "dma_device_type": 2 00:04:30.679 } 00:04:30.679 ], 00:04:30.679 "driver_specific": {} 00:04:30.679 } 00:04:30.679 ]' 00:04:30.679 11:27:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:30.940 11:27:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:30.940 11:27:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:30.940 11:27:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.940 11:27:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.940 [2024-11-15 11:27:56.199948] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:30.940 
[2024-11-15 11:27:56.199995] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:30.940 [2024-11-15 11:27:56.200012] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13dc920 00:04:30.940 [2024-11-15 11:27:56.200019] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:30.940 [2024-11-15 11:27:56.201616] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:30.940 [2024-11-15 11:27:56.201652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:30.940 Passthru0 00:04:30.940 11:27:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.940 11:27:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:30.940 11:27:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.940 11:27:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.940 11:27:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.940 11:27:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:30.940 { 00:04:30.940 "name": "Malloc2", 00:04:30.940 "aliases": [ 00:04:30.940 "3d14c83b-3192-44ac-8ca7-63e80cb30670" 00:04:30.940 ], 00:04:30.940 "product_name": "Malloc disk", 00:04:30.940 "block_size": 512, 00:04:30.940 "num_blocks": 16384, 00:04:30.940 "uuid": "3d14c83b-3192-44ac-8ca7-63e80cb30670", 00:04:30.940 "assigned_rate_limits": { 00:04:30.940 "rw_ios_per_sec": 0, 00:04:30.940 "rw_mbytes_per_sec": 0, 00:04:30.940 "r_mbytes_per_sec": 0, 00:04:30.940 "w_mbytes_per_sec": 0 00:04:30.940 }, 00:04:30.940 "claimed": true, 00:04:30.940 "claim_type": "exclusive_write", 00:04:30.940 "zoned": false, 00:04:30.940 "supported_io_types": { 00:04:30.940 "read": true, 00:04:30.940 "write": true, 00:04:30.940 "unmap": true, 00:04:30.940 "flush": true, 00:04:30.940 "reset": true, 00:04:30.940 "nvme_admin": false, 00:04:30.940 "nvme_io": false, 00:04:30.940 "nvme_io_md": false, 00:04:30.940 "write_zeroes": true, 00:04:30.940 "zcopy": true, 00:04:30.940 "get_zone_info": false, 00:04:30.940 "zone_management": false, 00:04:30.940 "zone_append": false, 00:04:30.940 "compare": false, 00:04:30.940 "compare_and_write": false, 00:04:30.940 "abort": true, 00:04:30.940 "seek_hole": false, 00:04:30.940 "seek_data": false, 00:04:30.940 "copy": true, 00:04:30.940 "nvme_iov_md": false 00:04:30.940 }, 00:04:30.940 "memory_domains": [ 00:04:30.940 { 00:04:30.940 "dma_device_id": "system", 00:04:30.940 "dma_device_type": 1 00:04:30.940 }, 00:04:30.940 { 00:04:30.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.940 "dma_device_type": 2 00:04:30.940 } 00:04:30.940 ], 00:04:30.940 "driver_specific": {} 00:04:30.940 }, 00:04:30.940 { 00:04:30.940 "name": "Passthru0", 00:04:30.940 "aliases": [ 00:04:30.940 "3890849a-ba2f-513b-b50b-48840e3f063d" 00:04:30.940 ], 00:04:30.940 "product_name": "passthru", 00:04:30.940 "block_size": 512, 00:04:30.940 "num_blocks": 16384, 00:04:30.940 "uuid": "3890849a-ba2f-513b-b50b-48840e3f063d", 00:04:30.940 "assigned_rate_limits": { 00:04:30.940 "rw_ios_per_sec": 0, 00:04:30.940 "rw_mbytes_per_sec": 0, 00:04:30.940 "r_mbytes_per_sec": 0, 00:04:30.940 "w_mbytes_per_sec": 0 00:04:30.940 }, 00:04:30.940 "claimed": false, 00:04:30.940 "zoned": false, 00:04:30.940 "supported_io_types": { 00:04:30.940 "read": true, 00:04:30.940 "write": true, 00:04:30.940 "unmap": true, 00:04:30.940 "flush": true, 00:04:30.940 "reset": true, 
00:04:30.940 "nvme_admin": false, 00:04:30.940 "nvme_io": false, 00:04:30.940 "nvme_io_md": false, 00:04:30.940 "write_zeroes": true, 00:04:30.940 "zcopy": true, 00:04:30.940 "get_zone_info": false, 00:04:30.940 "zone_management": false, 00:04:30.940 "zone_append": false, 00:04:30.940 "compare": false, 00:04:30.940 "compare_and_write": false, 00:04:30.940 "abort": true, 00:04:30.940 "seek_hole": false, 00:04:30.940 "seek_data": false, 00:04:30.940 "copy": true, 00:04:30.940 "nvme_iov_md": false 00:04:30.940 }, 00:04:30.940 "memory_domains": [ 00:04:30.940 { 00:04:30.940 "dma_device_id": "system", 00:04:30.940 "dma_device_type": 1 00:04:30.940 }, 00:04:30.940 { 00:04:30.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:30.940 "dma_device_type": 2 00:04:30.940 } 00:04:30.940 ], 00:04:30.940 "driver_specific": { 00:04:30.940 "passthru": { 00:04:30.940 "name": "Passthru0", 00:04:30.940 "base_bdev_name": "Malloc2" 00:04:30.940 } 00:04:30.940 } 00:04:30.940 } 00:04:30.940 ]' 00:04:30.940 11:27:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:30.940 11:27:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:30.940 11:27:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:30.940 11:27:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.940 11:27:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.940 11:27:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.940 11:27:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:30.940 11:27:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.940 11:27:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.940 11:27:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.940 11:27:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:30.940 11:27:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:30.940 11:27:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.940 11:27:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:30.940 11:27:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:30.940 11:27:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:30.940 11:27:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:30.940 00:04:30.940 real 0m0.309s 00:04:30.940 user 0m0.188s 00:04:30.940 sys 0m0.053s 00:04:30.940 11:27:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:30.940 11:27:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:30.940 ************************************ 00:04:30.940 END TEST rpc_daemon_integrity 00:04:30.940 ************************************ 00:04:30.940 11:27:56 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:30.940 11:27:56 rpc -- rpc/rpc.sh@84 -- # killprocess 830750 00:04:30.940 11:27:56 rpc -- common/autotest_common.sh@952 -- # '[' -z 830750 ']' 00:04:30.940 11:27:56 rpc -- common/autotest_common.sh@956 -- # kill -0 830750 00:04:30.940 11:27:56 rpc -- common/autotest_common.sh@957 -- # uname 00:04:30.940 11:27:56 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:30.940 11:27:56 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 830750 
00:04:31.200 11:27:56 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:31.200 11:27:56 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:31.200 11:27:56 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 830750' 00:04:31.200 killing process with pid 830750 00:04:31.200 11:27:56 rpc -- common/autotest_common.sh@971 -- # kill 830750 00:04:31.200 11:27:56 rpc -- common/autotest_common.sh@976 -- # wait 830750 00:04:31.460 00:04:31.460 real 0m2.745s 00:04:31.460 user 0m3.476s 00:04:31.460 sys 0m0.867s 00:04:31.460 11:27:56 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:31.460 11:27:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.460 ************************************ 00:04:31.460 END TEST rpc 00:04:31.460 ************************************ 00:04:31.460 11:27:56 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:31.460 11:27:56 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:31.460 11:27:56 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:31.460 11:27:56 -- common/autotest_common.sh@10 -- # set +x 00:04:31.460 ************************************ 00:04:31.460 START TEST skip_rpc 00:04:31.460 ************************************ 00:04:31.460 11:27:56 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:31.460 * Looking for test storage... 00:04:31.460 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:31.461 11:27:56 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:31.461 11:27:56 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:31.461 11:27:56 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:31.721 11:27:56 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:31.721 11:27:56 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:31.721 11:27:56 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:31.721 11:27:56 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:31.721 11:27:56 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:31.721 11:27:56 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:31.721 11:27:56 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:31.721 11:27:56 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:31.722 11:27:56 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:31.722 11:27:56 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:31.722 11:27:56 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:31.722 11:27:56 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:31.722 11:27:56 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:31.722 11:27:56 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:31.722 11:27:56 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:31.722 11:27:56 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:31.722 11:27:56 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:31.722 11:27:56 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:31.722 11:27:56 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:31.722 11:27:56 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:31.722 11:27:56 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:31.722 11:27:56 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:31.722 11:27:56 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:31.722 11:27:56 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:31.722 11:27:56 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:31.722 11:27:56 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:31.722 11:27:56 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:31.722 11:27:56 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:31.722 11:27:56 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:31.722 11:27:56 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:31.722 11:27:56 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:31.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.722 --rc genhtml_branch_coverage=1 00:04:31.722 --rc genhtml_function_coverage=1 00:04:31.722 --rc genhtml_legend=1 00:04:31.722 --rc geninfo_all_blocks=1 00:04:31.722 --rc geninfo_unexecuted_blocks=1 00:04:31.722 00:04:31.722 ' 00:04:31.722 11:27:56 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:31.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.722 --rc genhtml_branch_coverage=1 00:04:31.722 --rc genhtml_function_coverage=1 00:04:31.722 --rc genhtml_legend=1 00:04:31.722 --rc geninfo_all_blocks=1 00:04:31.722 --rc geninfo_unexecuted_blocks=1 00:04:31.722 00:04:31.722 ' 00:04:31.722 11:27:56 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:31.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.722 --rc genhtml_branch_coverage=1 00:04:31.722 --rc genhtml_function_coverage=1 00:04:31.722 --rc genhtml_legend=1 00:04:31.722 --rc geninfo_all_blocks=1 00:04:31.722 --rc geninfo_unexecuted_blocks=1 00:04:31.722 00:04:31.722 ' 00:04:31.722 11:27:56 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:31.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:31.722 --rc genhtml_branch_coverage=1 00:04:31.722 --rc genhtml_function_coverage=1 00:04:31.722 --rc genhtml_legend=1 00:04:31.722 --rc geninfo_all_blocks=1 00:04:31.722 --rc geninfo_unexecuted_blocks=1 00:04:31.722 00:04:31.722 ' 00:04:31.722 11:27:56 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:31.722 11:27:56 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:31.722 11:27:56 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:31.722 11:27:56 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:31.722 11:27:56 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:31.722 11:27:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.722 ************************************ 00:04:31.722 START TEST skip_rpc 00:04:31.722 ************************************ 00:04:31.722 11:27:57 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:04:31.722 
11:27:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=831597 00:04:31.722 11:27:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:31.722 11:27:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:31.722 11:27:57 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:31.722 [2024-11-15 11:27:57.084029] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:04:31.722 [2024-11-15 11:27:57.084090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid831597 ] 00:04:31.722 [2024-11-15 11:27:57.177083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.982 [2024-11-15 11:27:57.230002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.271 11:28:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:37.271 11:28:02 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:37.271 11:28:02 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:37.271 11:28:02 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:37.271 11:28:02 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:37.271 11:28:02 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:37.271 11:28:02 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:37.271 11:28:02 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:37.271 11:28:02 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.271 11:28:02 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.271 11:28:02 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:37.271 11:28:02 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:37.271 11:28:02 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:37.271 11:28:02 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:37.271 11:28:02 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:37.271 11:28:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:37.271 11:28:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 831597 00:04:37.271 11:28:02 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 831597 ']' 00:04:37.271 11:28:02 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 831597 00:04:37.271 11:28:02 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:04:37.271 11:28:02 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:37.272 11:28:02 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 831597 00:04:37.272 11:28:02 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:37.272 11:28:02 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:37.272 11:28:02 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 831597' 00:04:37.272 killing process with pid 831597 00:04:37.272 11:28:02 
skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 831597 00:04:37.272 11:28:02 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 831597 00:04:37.272 00:04:37.272 real 0m5.266s 00:04:37.272 user 0m5.014s 00:04:37.272 sys 0m0.300s 00:04:37.272 11:28:02 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:37.272 11:28:02 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.272 ************************************ 00:04:37.272 END TEST skip_rpc 00:04:37.272 ************************************ 00:04:37.272 11:28:02 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:37.272 11:28:02 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:37.272 11:28:02 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:37.272 11:28:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.272 ************************************ 00:04:37.272 START TEST skip_rpc_with_json 00:04:37.272 ************************************ 00:04:37.272 11:28:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:04:37.272 11:28:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:37.272 11:28:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=832636 00:04:37.272 11:28:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:37.272 11:28:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 832636 00:04:37.272 11:28:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:37.272 11:28:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 832636 ']' 00:04:37.272 11:28:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.272 11:28:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:37.272 11:28:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.272 11:28:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:37.272 11:28:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:37.272 [2024-11-15 11:28:02.423944] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
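NOTE: the kill sequence traced above (kill -0, uname, ps comm, kill, wait), reconstructed as one helper. This is an assumed simplification of common/autotest_common.sh's killprocess, not its exact body:

    killprocess_sketch() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0        # already exited
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0 in this run
            [ "$name" = sudo ] && return 1            # never kill the sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid" 2>/dev/null        # wait reaps it when it is our child
    }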
00:04:37.272 [2024-11-15 11:28:02.423992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid832636 ] 00:04:37.272 [2024-11-15 11:28:02.507907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.272 [2024-11-15 11:28:02.539080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.844 11:28:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:37.844 11:28:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:04:37.844 11:28:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:37.844 11:28:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.844 11:28:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:37.844 [2024-11-15 11:28:03.208370] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:37.844 request: 00:04:37.844 { 00:04:37.844 "trtype": "tcp", 00:04:37.844 "method": "nvmf_get_transports", 00:04:37.844 "req_id": 1 00:04:37.844 } 00:04:37.844 Got JSON-RPC error response 00:04:37.844 response: 00:04:37.844 { 00:04:37.844 "code": -19, 00:04:37.844 "message": "No such device" 00:04:37.844 } 00:04:37.844 11:28:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:37.844 11:28:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:37.844 11:28:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.844 11:28:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:37.844 [2024-11-15 11:28:03.220466] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:37.844 11:28:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.844 11:28:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:37.844 11:28:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.844 11:28:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:38.105 11:28:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:38.105 11:28:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:38.105 { 00:04:38.105 "subsystems": [ 00:04:38.105 { 00:04:38.105 "subsystem": "fsdev", 00:04:38.105 "config": [ 00:04:38.105 { 00:04:38.105 "method": "fsdev_set_opts", 00:04:38.105 "params": { 00:04:38.105 "fsdev_io_pool_size": 65535, 00:04:38.105 "fsdev_io_cache_size": 256 00:04:38.105 } 00:04:38.105 } 00:04:38.105 ] 00:04:38.105 }, 00:04:38.105 { 00:04:38.105 "subsystem": "vfio_user_target", 00:04:38.105 "config": null 00:04:38.105 }, 00:04:38.105 { 00:04:38.105 "subsystem": "keyring", 00:04:38.105 "config": [] 00:04:38.105 }, 00:04:38.105 { 00:04:38.105 "subsystem": "iobuf", 00:04:38.105 "config": [ 00:04:38.105 { 00:04:38.105 "method": "iobuf_set_options", 00:04:38.105 "params": { 00:04:38.105 "small_pool_count": 8192, 00:04:38.105 "large_pool_count": 1024, 00:04:38.106 "small_bufsize": 8192, 00:04:38.106 "large_bufsize": 135168, 00:04:38.106 "enable_numa": false 00:04:38.106 } 00:04:38.106 } 00:04:38.106 
] 00:04:38.106 }, 00:04:38.106 { 00:04:38.106 "subsystem": "sock", 00:04:38.106 "config": [ 00:04:38.106 { 00:04:38.106 "method": "sock_set_default_impl", 00:04:38.106 "params": { 00:04:38.106 "impl_name": "posix" 00:04:38.106 } 00:04:38.106 }, 00:04:38.106 { 00:04:38.106 "method": "sock_impl_set_options", 00:04:38.106 "params": { 00:04:38.106 "impl_name": "ssl", 00:04:38.106 "recv_buf_size": 4096, 00:04:38.106 "send_buf_size": 4096, 00:04:38.106 "enable_recv_pipe": true, 00:04:38.106 "enable_quickack": false, 00:04:38.106 "enable_placement_id": 0, 00:04:38.106 "enable_zerocopy_send_server": true, 00:04:38.106 "enable_zerocopy_send_client": false, 00:04:38.106 "zerocopy_threshold": 0, 00:04:38.106 "tls_version": 0, 00:04:38.106 "enable_ktls": false 00:04:38.106 } 00:04:38.106 }, 00:04:38.106 { 00:04:38.106 "method": "sock_impl_set_options", 00:04:38.106 "params": { 00:04:38.106 "impl_name": "posix", 00:04:38.106 "recv_buf_size": 2097152, 00:04:38.106 "send_buf_size": 2097152, 00:04:38.106 "enable_recv_pipe": true, 00:04:38.106 "enable_quickack": false, 00:04:38.106 "enable_placement_id": 0, 00:04:38.106 "enable_zerocopy_send_server": true, 00:04:38.106 "enable_zerocopy_send_client": false, 00:04:38.106 "zerocopy_threshold": 0, 00:04:38.106 "tls_version": 0, 00:04:38.106 "enable_ktls": false 00:04:38.106 } 00:04:38.106 } 00:04:38.106 ] 00:04:38.106 }, 00:04:38.106 { 00:04:38.106 "subsystem": "vmd", 00:04:38.106 "config": [] 00:04:38.106 }, 00:04:38.106 { 00:04:38.106 "subsystem": "accel", 00:04:38.106 "config": [ 00:04:38.106 { 00:04:38.106 "method": "accel_set_options", 00:04:38.106 "params": { 00:04:38.106 "small_cache_size": 128, 00:04:38.106 "large_cache_size": 16, 00:04:38.106 "task_count": 2048, 00:04:38.106 "sequence_count": 2048, 00:04:38.106 "buf_count": 2048 00:04:38.106 } 00:04:38.106 } 00:04:38.106 ] 00:04:38.106 }, 00:04:38.106 { 00:04:38.106 "subsystem": "bdev", 00:04:38.106 "config": [ 00:04:38.106 { 00:04:38.106 "method": "bdev_set_options", 00:04:38.106 "params": { 00:04:38.106 "bdev_io_pool_size": 65535, 00:04:38.106 "bdev_io_cache_size": 256, 00:04:38.106 "bdev_auto_examine": true, 00:04:38.106 "iobuf_small_cache_size": 128, 00:04:38.106 "iobuf_large_cache_size": 16 00:04:38.106 } 00:04:38.106 }, 00:04:38.106 { 00:04:38.106 "method": "bdev_raid_set_options", 00:04:38.106 "params": { 00:04:38.106 "process_window_size_kb": 1024, 00:04:38.106 "process_max_bandwidth_mb_sec": 0 00:04:38.106 } 00:04:38.106 }, 00:04:38.106 { 00:04:38.106 "method": "bdev_iscsi_set_options", 00:04:38.106 "params": { 00:04:38.106 "timeout_sec": 30 00:04:38.106 } 00:04:38.106 }, 00:04:38.106 { 00:04:38.106 "method": "bdev_nvme_set_options", 00:04:38.106 "params": { 00:04:38.106 "action_on_timeout": "none", 00:04:38.106 "timeout_us": 0, 00:04:38.106 "timeout_admin_us": 0, 00:04:38.106 "keep_alive_timeout_ms": 10000, 00:04:38.106 "arbitration_burst": 0, 00:04:38.106 "low_priority_weight": 0, 00:04:38.106 "medium_priority_weight": 0, 00:04:38.106 "high_priority_weight": 0, 00:04:38.106 "nvme_adminq_poll_period_us": 10000, 00:04:38.106 "nvme_ioq_poll_period_us": 0, 00:04:38.106 "io_queue_requests": 0, 00:04:38.106 "delay_cmd_submit": true, 00:04:38.106 "transport_retry_count": 4, 00:04:38.106 "bdev_retry_count": 3, 00:04:38.106 "transport_ack_timeout": 0, 00:04:38.106 "ctrlr_loss_timeout_sec": 0, 00:04:38.106 "reconnect_delay_sec": 0, 00:04:38.106 "fast_io_fail_timeout_sec": 0, 00:04:38.106 "disable_auto_failback": false, 00:04:38.106 "generate_uuids": false, 00:04:38.106 "transport_tos": 0, 
00:04:38.106 "nvme_error_stat": false, 00:04:38.106 "rdma_srq_size": 0, 00:04:38.106 "io_path_stat": false, 00:04:38.106 "allow_accel_sequence": false, 00:04:38.106 "rdma_max_cq_size": 0, 00:04:38.106 "rdma_cm_event_timeout_ms": 0, 00:04:38.106 "dhchap_digests": [ 00:04:38.106 "sha256", 00:04:38.106 "sha384", 00:04:38.106 "sha512" 00:04:38.106 ], 00:04:38.106 "dhchap_dhgroups": [ 00:04:38.106 "null", 00:04:38.106 "ffdhe2048", 00:04:38.106 "ffdhe3072", 00:04:38.106 "ffdhe4096", 00:04:38.106 "ffdhe6144", 00:04:38.106 "ffdhe8192" 00:04:38.106 ] 00:04:38.106 } 00:04:38.106 }, 00:04:38.106 { 00:04:38.106 "method": "bdev_nvme_set_hotplug", 00:04:38.106 "params": { 00:04:38.106 "period_us": 100000, 00:04:38.106 "enable": false 00:04:38.106 } 00:04:38.106 }, 00:04:38.106 { 00:04:38.106 "method": "bdev_wait_for_examine" 00:04:38.106 } 00:04:38.106 ] 00:04:38.106 }, 00:04:38.106 { 00:04:38.106 "subsystem": "scsi", 00:04:38.106 "config": null 00:04:38.106 }, 00:04:38.106 { 00:04:38.106 "subsystem": "scheduler", 00:04:38.106 "config": [ 00:04:38.106 { 00:04:38.106 "method": "framework_set_scheduler", 00:04:38.106 "params": { 00:04:38.106 "name": "static" 00:04:38.106 } 00:04:38.106 } 00:04:38.106 ] 00:04:38.106 }, 00:04:38.106 { 00:04:38.106 "subsystem": "vhost_scsi", 00:04:38.106 "config": [] 00:04:38.106 }, 00:04:38.106 { 00:04:38.106 "subsystem": "vhost_blk", 00:04:38.106 "config": [] 00:04:38.106 }, 00:04:38.106 { 00:04:38.106 "subsystem": "ublk", 00:04:38.106 "config": [] 00:04:38.106 }, 00:04:38.106 { 00:04:38.106 "subsystem": "nbd", 00:04:38.106 "config": [] 00:04:38.106 }, 00:04:38.106 { 00:04:38.106 "subsystem": "nvmf", 00:04:38.106 "config": [ 00:04:38.106 { 00:04:38.106 "method": "nvmf_set_config", 00:04:38.106 "params": { 00:04:38.106 "discovery_filter": "match_any", 00:04:38.106 "admin_cmd_passthru": { 00:04:38.106 "identify_ctrlr": false 00:04:38.106 }, 00:04:38.106 "dhchap_digests": [ 00:04:38.106 "sha256", 00:04:38.106 "sha384", 00:04:38.106 "sha512" 00:04:38.106 ], 00:04:38.106 "dhchap_dhgroups": [ 00:04:38.106 "null", 00:04:38.106 "ffdhe2048", 00:04:38.106 "ffdhe3072", 00:04:38.106 "ffdhe4096", 00:04:38.106 "ffdhe6144", 00:04:38.106 "ffdhe8192" 00:04:38.106 ] 00:04:38.106 } 00:04:38.106 }, 00:04:38.106 { 00:04:38.106 "method": "nvmf_set_max_subsystems", 00:04:38.106 "params": { 00:04:38.106 "max_subsystems": 1024 00:04:38.106 } 00:04:38.106 }, 00:04:38.106 { 00:04:38.106 "method": "nvmf_set_crdt", 00:04:38.106 "params": { 00:04:38.106 "crdt1": 0, 00:04:38.106 "crdt2": 0, 00:04:38.106 "crdt3": 0 00:04:38.106 } 00:04:38.106 }, 00:04:38.106 { 00:04:38.106 "method": "nvmf_create_transport", 00:04:38.106 "params": { 00:04:38.106 "trtype": "TCP", 00:04:38.106 "max_queue_depth": 128, 00:04:38.106 "max_io_qpairs_per_ctrlr": 127, 00:04:38.106 "in_capsule_data_size": 4096, 00:04:38.106 "max_io_size": 131072, 00:04:38.106 "io_unit_size": 131072, 00:04:38.106 "max_aq_depth": 128, 00:04:38.106 "num_shared_buffers": 511, 00:04:38.106 "buf_cache_size": 4294967295, 00:04:38.106 "dif_insert_or_strip": false, 00:04:38.106 "zcopy": false, 00:04:38.106 "c2h_success": true, 00:04:38.106 "sock_priority": 0, 00:04:38.106 "abort_timeout_sec": 1, 00:04:38.106 "ack_timeout": 0, 00:04:38.106 "data_wr_pool_size": 0 00:04:38.106 } 00:04:38.106 } 00:04:38.106 ] 00:04:38.106 }, 00:04:38.106 { 00:04:38.106 "subsystem": "iscsi", 00:04:38.106 "config": [ 00:04:38.106 { 00:04:38.106 "method": "iscsi_set_options", 00:04:38.106 "params": { 00:04:38.106 "node_base": "iqn.2016-06.io.spdk", 00:04:38.106 "max_sessions": 
128, 00:04:38.106 "max_connections_per_session": 2, 00:04:38.106 "max_queue_depth": 64, 00:04:38.106 "default_time2wait": 2, 00:04:38.106 "default_time2retain": 20, 00:04:38.106 "first_burst_length": 8192, 00:04:38.106 "immediate_data": true, 00:04:38.106 "allow_duplicated_isid": false, 00:04:38.106 "error_recovery_level": 0, 00:04:38.106 "nop_timeout": 60, 00:04:38.106 "nop_in_interval": 30, 00:04:38.106 "disable_chap": false, 00:04:38.106 "require_chap": false, 00:04:38.106 "mutual_chap": false, 00:04:38.106 "chap_group": 0, 00:04:38.106 "max_large_datain_per_connection": 64, 00:04:38.106 "max_r2t_per_connection": 4, 00:04:38.106 "pdu_pool_size": 36864, 00:04:38.106 "immediate_data_pool_size": 16384, 00:04:38.106 "data_out_pool_size": 2048 00:04:38.106 } 00:04:38.106 } 00:04:38.106 ] 00:04:38.106 } 00:04:38.106 ] 00:04:38.106 } 00:04:38.106 11:28:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:38.106 11:28:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 832636 00:04:38.106 11:28:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 832636 ']' 00:04:38.106 11:28:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 832636 00:04:38.106 11:28:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:38.106 11:28:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:38.106 11:28:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 832636 00:04:38.106 11:28:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:38.107 11:28:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:38.107 11:28:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 832636' 00:04:38.107 killing process with pid 832636 00:04:38.107 11:28:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 832636 00:04:38.107 11:28:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 832636 00:04:38.367 11:28:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=832978 00:04:38.367 11:28:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:38.367 11:28:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:43.650 11:28:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 832978 00:04:43.650 11:28:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 832978 ']' 00:04:43.650 11:28:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 832978 00:04:43.650 11:28:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:43.650 11:28:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:43.650 11:28:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 832978 00:04:43.650 11:28:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:43.650 11:28:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:43.650 11:28:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 
'killing process with pid 832978' 00:04:43.650 killing process with pid 832978 00:04:43.650 11:28:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 832978 00:04:43.650 11:28:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 832978 00:04:43.650 11:28:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:43.650 11:28:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:43.650 00:04:43.650 real 0m6.545s 00:04:43.650 user 0m6.440s 00:04:43.650 sys 0m0.566s 00:04:43.650 11:28:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:43.650 11:28:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:43.650 ************************************ 00:04:43.650 END TEST skip_rpc_with_json 00:04:43.650 ************************************ 00:04:43.650 11:28:08 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:43.650 11:28:08 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:43.650 11:28:08 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:43.650 11:28:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.650 ************************************ 00:04:43.650 START TEST skip_rpc_with_delay 00:04:43.650 ************************************ 00:04:43.650 11:28:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:04:43.650 11:28:08 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:43.650 11:28:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:43.650 11:28:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:43.650 11:28:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.650 11:28:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:43.650 11:28:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.650 11:28:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:43.650 11:28:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.650 11:28:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:43.650 11:28:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.650 11:28:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:43.650 11:28:08 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:43.650 [2024-11-15 
11:28:09.049394] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:43.650 11:28:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:43.650 11:28:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:43.650 11:28:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:43.650 11:28:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:43.650 00:04:43.650 real 0m0.077s 00:04:43.650 user 0m0.049s 00:04:43.650 sys 0m0.027s 00:04:43.650 11:28:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:43.650 11:28:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:43.650 ************************************ 00:04:43.650 END TEST skip_rpc_with_delay 00:04:43.650 ************************************ 00:04:43.650 11:28:09 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:43.650 11:28:09 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:43.650 11:28:09 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:43.650 11:28:09 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:43.650 11:28:09 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:43.650 11:28:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.650 ************************************ 00:04:43.650 START TEST exit_on_failed_rpc_init 00:04:43.650 ************************************ 00:04:43.650 11:28:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:04:43.911 11:28:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=834040 00:04:43.911 11:28:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 834040 00:04:43.911 11:28:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:43.911 11:28:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 834040 ']' 00:04:43.911 11:28:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.911 11:28:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:43.911 11:28:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.911 11:28:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:43.911 11:28:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:43.911 [2024-11-15 11:28:09.204023] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:04:43.911 [2024-11-15 11:28:09.204080] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid834040 ] 00:04:43.911 [2024-11-15 11:28:09.292232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.911 [2024-11-15 11:28:09.326760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.855 11:28:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:44.855 11:28:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:04:44.855 11:28:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:44.855 11:28:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:44.855 11:28:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:44.855 11:28:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:44.856 11:28:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:44.856 11:28:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:44.856 11:28:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:44.856 11:28:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:44.856 11:28:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:44.856 11:28:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:44.856 11:28:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:44.856 11:28:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:44.856 11:28:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:44.856 [2024-11-15 11:28:10.057154] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:04:44.856 [2024-11-15 11:28:10.057206] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid834262 ] 00:04:44.856 [2024-11-15 11:28:10.142863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.856 [2024-11-15 11:28:10.179103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:44.856 [2024-11-15 11:28:10.179150] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
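NOTE: both negative tests in this run (rpc_cmd with --no-rpc-server, and a second target on the in-use /var/tmp/spdk.sock above) lean on the NOT wrapper. A minimal sketch of the idea; the real helper in common/autotest_common.sh also normalizes exit codes through valid_exec_arg and the es bookkeeping visible in the trace:

    NOT() {
        if "$@"; then
            return 1        # command unexpectedly succeeded: the test must fail
        fi
        return 0            # expected failure observed
    }
    # e.g.: NOT spdk_tgt -m 0x2   # second instance must refuse the busy socket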
00:04:44.856 [2024-11-15 11:28:10.179160] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:44.856 [2024-11-15 11:28:10.179167] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:44.856 11:28:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:44.856 11:28:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:44.856 11:28:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:44.856 11:28:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:44.856 11:28:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:44.856 11:28:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:44.856 11:28:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:44.856 11:28:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 834040 00:04:44.856 11:28:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 834040 ']' 00:04:44.856 11:28:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 834040 00:04:44.856 11:28:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:04:44.856 11:28:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:44.856 11:28:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 834040 00:04:44.856 11:28:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:44.856 11:28:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:44.856 11:28:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 834040' 00:04:44.856 killing process with pid 834040 00:04:44.856 11:28:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 834040 00:04:44.856 11:28:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 834040 00:04:45.117 00:04:45.117 real 0m1.325s 00:04:45.117 user 0m1.526s 00:04:45.117 sys 0m0.404s 00:04:45.117 11:28:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:45.117 11:28:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:45.117 ************************************ 00:04:45.117 END TEST exit_on_failed_rpc_init 00:04:45.117 ************************************ 00:04:45.117 11:28:10 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:45.117 00:04:45.117 real 0m13.723s 00:04:45.117 user 0m13.256s 00:04:45.117 sys 0m1.613s 00:04:45.117 11:28:10 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:45.117 11:28:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.117 ************************************ 00:04:45.117 END TEST skip_rpc 00:04:45.117 ************************************ 00:04:45.117 11:28:10 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:45.117 11:28:10 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:45.117 11:28:10 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:45.117 11:28:10 -- 
common/autotest_common.sh@10 -- # set +x 00:04:45.117 ************************************ 00:04:45.117 START TEST rpc_client 00:04:45.117 ************************************ 00:04:45.117 11:28:10 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:45.378 * Looking for test storage... 00:04:45.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:45.378 11:28:10 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:45.378 11:28:10 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:04:45.378 11:28:10 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:45.378 11:28:10 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:45.378 11:28:10 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.378 11:28:10 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.378 11:28:10 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.378 11:28:10 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.378 11:28:10 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.378 11:28:10 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.378 11:28:10 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:45.378 11:28:10 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.378 11:28:10 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.378 11:28:10 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.378 11:28:10 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.378 11:28:10 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:45.378 11:28:10 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:45.378 11:28:10 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.378 11:28:10 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:45.378 11:28:10 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:45.378 11:28:10 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:45.378 11:28:10 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.378 11:28:10 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:45.378 11:28:10 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.378 11:28:10 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:45.378 11:28:10 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:45.378 11:28:10 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.378 11:28:10 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:45.378 11:28:10 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.378 11:28:10 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.378 11:28:10 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.378 11:28:10 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:45.378 11:28:10 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.378 11:28:10 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:45.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.378 --rc genhtml_branch_coverage=1 00:04:45.378 --rc genhtml_function_coverage=1 00:04:45.378 --rc genhtml_legend=1 00:04:45.378 --rc geninfo_all_blocks=1 00:04:45.378 --rc geninfo_unexecuted_blocks=1 00:04:45.378 00:04:45.378 ' 00:04:45.378 11:28:10 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:45.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.378 --rc genhtml_branch_coverage=1 00:04:45.378 --rc genhtml_function_coverage=1 00:04:45.378 --rc genhtml_legend=1 00:04:45.378 --rc geninfo_all_blocks=1 00:04:45.379 --rc geninfo_unexecuted_blocks=1 00:04:45.379 00:04:45.379 ' 00:04:45.379 11:28:10 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:45.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.379 --rc genhtml_branch_coverage=1 00:04:45.379 --rc genhtml_function_coverage=1 00:04:45.379 --rc genhtml_legend=1 00:04:45.379 --rc geninfo_all_blocks=1 00:04:45.379 --rc geninfo_unexecuted_blocks=1 00:04:45.379 00:04:45.379 ' 00:04:45.379 11:28:10 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:45.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.379 --rc genhtml_branch_coverage=1 00:04:45.379 --rc genhtml_function_coverage=1 00:04:45.379 --rc genhtml_legend=1 00:04:45.379 --rc geninfo_all_blocks=1 00:04:45.379 --rc geninfo_unexecuted_blocks=1 00:04:45.379 00:04:45.379 ' 00:04:45.379 11:28:10 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:45.379 OK 00:04:45.379 11:28:10 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:45.379 00:04:45.379 real 0m0.230s 00:04:45.379 user 0m0.133s 00:04:45.379 sys 0m0.111s 00:04:45.379 11:28:10 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:45.379 11:28:10 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:45.379 ************************************ 00:04:45.379 END TEST rpc_client 00:04:45.379 ************************************ 00:04:45.379 11:28:10 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
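NOTE: every START TEST / END TEST banner and real/user/sys summary in this log comes from the run_test wrapper. Roughly, as an assumed simplification of common/autotest_common.sh (the real wrapper also checks argument counts and toggles xtrace, as the '[' 2 -le 1 ']' lines show):

    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                      # produces the real/user/sys lines
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }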
00:04:45.379 11:28:10 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:45.379 11:28:10 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:45.379 11:28:10 -- common/autotest_common.sh@10 -- # set +x 00:04:45.641 ************************************ 00:04:45.641 START TEST json_config 00:04:45.641 ************************************ 00:04:45.641 11:28:10 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:45.641 11:28:10 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:45.641 11:28:10 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:04:45.641 11:28:10 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:45.641 11:28:11 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:45.641 11:28:11 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.641 11:28:11 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.641 11:28:11 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.641 11:28:11 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.641 11:28:11 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.641 11:28:11 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.641 11:28:11 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:45.641 11:28:11 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.641 11:28:11 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.641 11:28:11 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.641 11:28:11 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.641 11:28:11 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:45.641 11:28:11 json_config -- scripts/common.sh@345 -- # : 1 00:04:45.641 11:28:11 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.641 11:28:11 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:45.641 11:28:11 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:45.641 11:28:11 json_config -- scripts/common.sh@353 -- # local d=1 00:04:45.641 11:28:11 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.641 11:28:11 json_config -- scripts/common.sh@355 -- # echo 1 00:04:45.641 11:28:11 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.641 11:28:11 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:45.641 11:28:11 json_config -- scripts/common.sh@353 -- # local d=2 00:04:45.641 11:28:11 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.641 11:28:11 json_config -- scripts/common.sh@355 -- # echo 2 00:04:45.641 11:28:11 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.641 11:28:11 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.641 11:28:11 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.641 11:28:11 json_config -- scripts/common.sh@368 -- # return 0 00:04:45.641 11:28:11 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.641 11:28:11 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:45.641 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.641 --rc genhtml_branch_coverage=1 00:04:45.641 --rc genhtml_function_coverage=1 00:04:45.641 --rc genhtml_legend=1 00:04:45.641 --rc geninfo_all_blocks=1 00:04:45.642 --rc geninfo_unexecuted_blocks=1 00:04:45.642 00:04:45.642 ' 00:04:45.642 11:28:11 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:45.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.642 --rc genhtml_branch_coverage=1 00:04:45.642 --rc genhtml_function_coverage=1 00:04:45.642 --rc genhtml_legend=1 00:04:45.642 --rc geninfo_all_blocks=1 00:04:45.642 --rc geninfo_unexecuted_blocks=1 00:04:45.642 00:04:45.642 ' 00:04:45.642 11:28:11 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:45.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.642 --rc genhtml_branch_coverage=1 00:04:45.642 --rc genhtml_function_coverage=1 00:04:45.642 --rc genhtml_legend=1 00:04:45.642 --rc geninfo_all_blocks=1 00:04:45.642 --rc geninfo_unexecuted_blocks=1 00:04:45.642 00:04:45.642 ' 00:04:45.642 11:28:11 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:45.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.642 --rc genhtml_branch_coverage=1 00:04:45.642 --rc genhtml_function_coverage=1 00:04:45.642 --rc genhtml_legend=1 00:04:45.642 --rc geninfo_all_blocks=1 00:04:45.642 --rc geninfo_unexecuted_blocks=1 00:04:45.642 00:04:45.642 ' 00:04:45.642 11:28:11 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:45.642 11:28:11 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:45.642 11:28:11 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:45.642 11:28:11 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:45.642 11:28:11 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:45.642 11:28:11 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:45.642 11:28:11 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:45.642 11:28:11 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:45.642 11:28:11 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:45.642 11:28:11 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:45.642 11:28:11 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:45.642 11:28:11 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:45.642 11:28:11 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:45.642 11:28:11 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:45.642 11:28:11 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:45.642 11:28:11 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:45.642 11:28:11 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:45.642 11:28:11 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:45.642 11:28:11 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:45.642 11:28:11 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:45.642 11:28:11 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:45.642 11:28:11 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:45.642 11:28:11 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:45.642 11:28:11 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.642 11:28:11 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.642 11:28:11 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.642 11:28:11 json_config -- paths/export.sh@5 -- # export PATH 00:04:45.642 11:28:11 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:45.642 11:28:11 json_config -- nvmf/common.sh@51 -- # : 0 00:04:45.642 11:28:11 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:45.642 11:28:11 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:45.642 11:28:11 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:45.642 11:28:11 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:45.642 11:28:11 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:45.642 11:28:11 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:45.642 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:45.642 11:28:11 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:45.642 11:28:11 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:45.642 11:28:11 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:45.642 11:28:11 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:45.642 11:28:11 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:45.642 11:28:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:45.642 11:28:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:45.642 11:28:11 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:45.642 11:28:11 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:45.642 11:28:11 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:45.642 11:28:11 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:45.642 11:28:11 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:45.642 11:28:11 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:45.642 11:28:11 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:45.642 11:28:11 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:45.642 11:28:11 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:45.642 11:28:11 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:45.642 11:28:11 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:45.642 11:28:11 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:45.642 INFO: JSON configuration test init 00:04:45.642 11:28:11 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:45.642 11:28:11 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:45.642 11:28:11 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:45.642 11:28:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.642 11:28:11 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:45.642 11:28:11 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:45.642 11:28:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.642 11:28:11 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:45.642 11:28:11 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:45.642 11:28:11 json_config -- json_config/common.sh@10 -- # shift 00:04:45.642 11:28:11 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:45.642 11:28:11 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:45.642 11:28:11 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:45.642 11:28:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:45.642 11:28:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:45.642 11:28:11 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=834514 00:04:45.643 11:28:11 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:45.643 Waiting for target to run... 00:04:45.643 11:28:11 json_config -- json_config/common.sh@25 -- # waitforlisten 834514 /var/tmp/spdk_tgt.sock 00:04:45.643 11:28:11 json_config -- common/autotest_common.sh@833 -- # '[' -z 834514 ']' 00:04:45.643 11:28:11 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:45.643 11:28:11 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:45.643 11:28:11 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:45.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:45.643 11:28:11 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:45.643 11:28:11 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:45.643 11:28:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.904 [2024-11-15 11:28:11.165675] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:04:45.904 [2024-11-15 11:28:11.165730] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid834514 ] 00:04:46.165 [2024-11-15 11:28:11.562434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.165 [2024-11-15 11:28:11.595741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.738 11:28:11 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:46.738 11:28:11 json_config -- common/autotest_common.sh@866 -- # return 0 00:04:46.738 11:28:11 json_config -- json_config/common.sh@26 -- # echo '' 00:04:46.738 00:04:46.738 11:28:11 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:46.738 11:28:11 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:46.738 11:28:11 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:46.738 11:28:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.738 11:28:11 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:46.738 11:28:11 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:46.738 11:28:11 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:46.738 11:28:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:46.738 11:28:12 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:46.738 11:28:12 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:46.738 11:28:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:47.310 11:28:12 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:47.310 11:28:12 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:47.310 11:28:12 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:47.310 11:28:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.310 11:28:12 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:47.310 11:28:12 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:47.310 11:28:12 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:47.310 11:28:12 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:47.310 11:28:12 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:47.310 11:28:12 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:47.310 11:28:12 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:47.310 11:28:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:47.310 11:28:12 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:47.310 11:28:12 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:47.310 11:28:12 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:47.310 11:28:12 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:47.310 11:28:12 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:47.310 11:28:12 json_config -- json_config/json_config.sh@54 -- # sort 00:04:47.310 11:28:12 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:47.310 11:28:12 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:47.310 11:28:12 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:47.310 11:28:12 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:47.310 11:28:12 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:47.310 11:28:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.310 11:28:12 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:47.310 11:28:12 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:47.310 11:28:12 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:47.310 11:28:12 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:47.310 11:28:12 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:47.310 11:28:12 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:47.310 11:28:12 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:47.310 11:28:12 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:47.310 11:28:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.310 11:28:12 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:47.310 11:28:12 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:47.310 11:28:12 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:47.310 11:28:12 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:47.310 11:28:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:47.571 MallocForNvmf0 00:04:47.571 11:28:12 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:47.571 11:28:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:47.833 MallocForNvmf1 00:04:47.833 11:28:13 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:47.833 11:28:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:47.833 [2024-11-15 11:28:13.317778] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:48.094 11:28:13 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:48.094 11:28:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:48.094 11:28:13 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:48.094 11:28:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:48.354 11:28:13 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:48.354 11:28:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:48.615 11:28:13 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:48.615 11:28:13 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:48.615 [2024-11-15 11:28:14.048001] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:48.615 11:28:14 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:48.615 11:28:14 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:48.615 11:28:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.875 11:28:14 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:48.875 11:28:14 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:48.875 11:28:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.875 11:28:14 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:48.876 11:28:14 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:48.876 11:28:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:48.876 MallocBdevForConfigChangeCheck 00:04:48.876 11:28:14 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:48.876 11:28:14 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:48.876 11:28:14 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:48.876 11:28:14 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:48.876 11:28:14 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:49.451 11:28:14 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:49.451 INFO: shutting down applications... 
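The subsystem build traced above is plain RPC and can be replayed by hand against any running spdk_tgt. A minimal sketch, using only calls that appear in this run, assuming rpc.py from the SPDK tree is on PATH and the target listens on /var/tmp/spdk_tgt.sock (both paths taken from this run):

  # Two malloc bdevs to serve as namespaces: 8 MiB / 512 B blocks and 4 MiB / 1024 B blocks
  rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
  rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
  # TCP transport, one subsystem, both namespaces, one listener on 127.0.0.1:4420
  rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
  rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  # Snapshot the result; this is the spdk_tgt_config.json diffed later in the run
  rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json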
00:04:49.451 11:28:14 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:49.451 11:28:14 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:49.451 11:28:14 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:49.451 11:28:14 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:49.711 Calling clear_iscsi_subsystem 00:04:49.711 Calling clear_nvmf_subsystem 00:04:49.711 Calling clear_nbd_subsystem 00:04:49.711 Calling clear_ublk_subsystem 00:04:49.711 Calling clear_vhost_blk_subsystem 00:04:49.711 Calling clear_vhost_scsi_subsystem 00:04:49.711 Calling clear_bdev_subsystem 00:04:49.711 11:28:15 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:49.711 11:28:15 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:49.711 11:28:15 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:49.711 11:28:15 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:49.711 11:28:15 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:49.711 11:28:15 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:50.283 11:28:15 json_config -- json_config/json_config.sh@352 -- # break 00:04:50.283 11:28:15 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:50.283 11:28:15 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:50.283 11:28:15 json_config -- json_config/common.sh@31 -- # local app=target 00:04:50.283 11:28:15 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:50.283 11:28:15 json_config -- json_config/common.sh@35 -- # [[ -n 834514 ]] 00:04:50.283 11:28:15 json_config -- json_config/common.sh@38 -- # kill -SIGINT 834514 00:04:50.283 11:28:15 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:50.283 11:28:15 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:50.283 11:28:15 json_config -- json_config/common.sh@41 -- # kill -0 834514 00:04:50.283 11:28:15 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:50.544 11:28:16 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:50.544 11:28:16 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:50.544 11:28:16 json_config -- json_config/common.sh@41 -- # kill -0 834514 00:04:50.544 11:28:16 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:50.544 11:28:16 json_config -- json_config/common.sh@43 -- # break 00:04:50.544 11:28:16 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:50.544 11:28:16 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:50.544 SPDK target shutdown done 00:04:50.544 11:28:16 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:50.544 INFO: relaunching applications... 
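The shutdown just traced is SIGINT plus a bounded liveness poll, not SIGKILL, so the target gets a chance to tear down its subsystems cleanly. A sketch of the pattern, with the 30 x 0.5 s budget taken from json_config/common.sh as traced here:

  pid=834514                               # target PID from this run; substitute your own
  kill -SIGINT "$pid"
  for ((i = 0; i < 30; i++)); do
      kill -0 "$pid" 2>/dev/null || break  # kill -0 only probes for liveness, sends no signal
      sleep 0.5
  done
  kill -0 "$pid" 2>/dev/null && echo "target still alive after 15 s" >&2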
00:04:50.544 11:28:16 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:50.544 11:28:16 json_config -- json_config/common.sh@9 -- # local app=target 00:04:50.544 11:28:16 json_config -- json_config/common.sh@10 -- # shift 00:04:50.544 11:28:16 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:50.544 11:28:16 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:50.544 11:28:16 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:50.544 11:28:16 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:50.544 11:28:16 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:50.544 11:28:16 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=835654 00:04:50.544 11:28:16 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:50.544 Waiting for target to run... 00:04:50.544 11:28:16 json_config -- json_config/common.sh@25 -- # waitforlisten 835654 /var/tmp/spdk_tgt.sock 00:04:50.544 11:28:16 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:50.544 11:28:16 json_config -- common/autotest_common.sh@833 -- # '[' -z 835654 ']' 00:04:50.544 11:28:16 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:50.544 11:28:16 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:50.544 11:28:16 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:50.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:50.544 11:28:16 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:50.544 11:28:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.804 [2024-11-15 11:28:16.093548] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:04:50.804 [2024-11-15 11:28:16.093612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid835654 ] 00:04:51.065 [2024-11-15 11:28:16.352184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.065 [2024-11-15 11:28:16.379324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.637 [2024-11-15 11:28:16.880173] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:51.637 [2024-11-15 11:28:16.912545] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:51.637 11:28:16 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:51.637 11:28:16 json_config -- common/autotest_common.sh@866 -- # return 0 00:04:51.637 11:28:16 json_config -- json_config/common.sh@26 -- # echo '' 00:04:51.637 00:04:51.637 11:28:16 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:51.637 11:28:16 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:51.637 INFO: Checking if target configuration is the same... 
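Note the relaunch replays nothing over RPC: the saved JSON is handed to spdk_tgt at startup via --json and the whole bdev/NVMe-oF topology is rebuilt before the test resumes. The invocation from this run, backgrounded the way the harness does it:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt \
      -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json &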
00:04:51.637 11:28:16 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:51.637 11:28:16 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:51.637 11:28:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:51.637 + '[' 2 -ne 2 ']' 00:04:51.637 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:51.637 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:51.637 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:51.637 +++ basename /dev/fd/62 00:04:51.637 ++ mktemp /tmp/62.XXX 00:04:51.637 + tmp_file_1=/tmp/62.93W 00:04:51.637 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:51.637 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:51.637 + tmp_file_2=/tmp/spdk_tgt_config.json.zDr 00:04:51.637 + ret=0 00:04:51.637 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:51.897 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:51.897 + diff -u /tmp/62.93W /tmp/spdk_tgt_config.json.zDr 00:04:51.897 + echo 'INFO: JSON config files are the same' 00:04:51.897 INFO: JSON config files are the same 00:04:51.897 + rm /tmp/62.93W /tmp/spdk_tgt_config.json.zDr 00:04:51.897 + exit 0 00:04:51.897 11:28:17 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:51.897 11:28:17 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:51.897 INFO: changing configuration and checking if this can be detected... 00:04:51.897 11:28:17 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:51.897 11:28:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:52.158 11:28:17 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:52.158 11:28:17 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:52.158 11:28:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:52.158 + '[' 2 -ne 2 ']' 00:04:52.158 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:52.158 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
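The comparison above never diffs raw JSON: json_diff.sh first canonicalizes both sides with config_filter.py -method sort, so key and array ordering cannot produce a false mismatch, and only then does diff -u decide. The essence of that check, with paths from this run (config_filter.py reads stdin and writes stdout, as the trace shows):

  testdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config
  # Canonicalize the live config and the saved snapshot, then compare
  rpc.py -s /var/tmp/spdk_tgt.sock save_config | "$testdir/config_filter.py" -method sort > /tmp/live.json
  "$testdir/config_filter.py" -method sort < spdk_tgt_config.json > /tmp/saved.json
  diff -u /tmp/live.json /tmp/saved.json && echo 'INFO: JSON config files are the same'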
00:04:52.158 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:52.158 +++ basename /dev/fd/62 00:04:52.158 ++ mktemp /tmp/62.XXX 00:04:52.158 + tmp_file_1=/tmp/62.Q1G 00:04:52.158 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:52.158 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:52.158 + tmp_file_2=/tmp/spdk_tgt_config.json.3Kp 00:04:52.158 + ret=0 00:04:52.158 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:52.418 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:52.418 + diff -u /tmp/62.Q1G /tmp/spdk_tgt_config.json.3Kp 00:04:52.418 + ret=1 00:04:52.418 + echo '=== Start of file: /tmp/62.Q1G ===' 00:04:52.418 + cat /tmp/62.Q1G 00:04:52.418 + echo '=== End of file: /tmp/62.Q1G ===' 00:04:52.418 + echo '' 00:04:52.418 + echo '=== Start of file: /tmp/spdk_tgt_config.json.3Kp ===' 00:04:52.418 + cat /tmp/spdk_tgt_config.json.3Kp 00:04:52.418 + echo '=== End of file: /tmp/spdk_tgt_config.json.3Kp ===' 00:04:52.418 + echo '' 00:04:52.418 + rm /tmp/62.Q1G /tmp/spdk_tgt_config.json.3Kp 00:04:52.418 + exit 1 00:04:52.418 11:28:17 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:52.418 INFO: configuration change detected. 00:04:52.418 11:28:17 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:52.418 11:28:17 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:52.418 11:28:17 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:52.418 11:28:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.418 11:28:17 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:52.418 11:28:17 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:52.418 11:28:17 json_config -- json_config/json_config.sh@324 -- # [[ -n 835654 ]] 00:04:52.418 11:28:17 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:52.418 11:28:17 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:52.418 11:28:17 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:52.418 11:28:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.418 11:28:17 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:52.418 11:28:17 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:52.418 11:28:17 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:52.418 11:28:17 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:52.418 11:28:17 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:52.418 11:28:17 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:52.418 11:28:17 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:52.418 11:28:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.679 11:28:17 json_config -- json_config/json_config.sh@330 -- # killprocess 835654 00:04:52.679 11:28:17 json_config -- common/autotest_common.sh@952 -- # '[' -z 835654 ']' 00:04:52.679 11:28:17 json_config -- common/autotest_common.sh@956 -- # kill -0 835654 00:04:52.679 11:28:17 json_config -- common/autotest_common.sh@957 -- # uname 00:04:52.679 11:28:17 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:52.679 11:28:17 
json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 835654 00:04:52.679 11:28:18 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:52.679 11:28:18 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:52.679 11:28:18 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 835654' 00:04:52.679 killing process with pid 835654 00:04:52.679 11:28:18 json_config -- common/autotest_common.sh@971 -- # kill 835654 00:04:52.679 11:28:18 json_config -- common/autotest_common.sh@976 -- # wait 835654 00:04:52.940 11:28:18 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:52.940 11:28:18 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:52.940 11:28:18 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:52.940 11:28:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.940 11:28:18 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:52.940 11:28:18 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:52.940 INFO: Success 00:04:52.940 00:04:52.940 real 0m7.424s 00:04:52.940 user 0m9.011s 00:04:52.940 sys 0m2.002s 00:04:52.940 11:28:18 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:52.940 11:28:18 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:52.940 ************************************ 00:04:52.940 END TEST json_config 00:04:52.940 ************************************ 00:04:52.940 11:28:18 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:52.940 11:28:18 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:52.940 11:28:18 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:52.940 11:28:18 -- common/autotest_common.sh@10 -- # set +x 00:04:52.940 ************************************ 00:04:52.940 START TEST json_config_extra_key 00:04:52.940 ************************************ 00:04:52.940 11:28:18 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:53.203 11:28:18 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:53.203 11:28:18 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:04:53.203 11:28:18 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:53.203 11:28:18 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:53.203 11:28:18 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.203 11:28:18 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.203 11:28:18 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.203 11:28:18 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.203 11:28:18 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.203 11:28:18 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.203 11:28:18 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.203 11:28:18 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.203 11:28:18 json_config_extra_key -- 
scripts/common.sh@340 -- # ver1_l=2 00:04:53.203 11:28:18 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.203 11:28:18 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.203 11:28:18 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:53.203 11:28:18 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:53.203 11:28:18 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.203 11:28:18 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:53.203 11:28:18 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:53.203 11:28:18 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:53.203 11:28:18 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.203 11:28:18 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:53.203 11:28:18 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.203 11:28:18 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:53.203 11:28:18 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:53.203 11:28:18 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.203 11:28:18 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:53.203 11:28:18 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.203 11:28:18 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.203 11:28:18 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.203 11:28:18 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:53.203 11:28:18 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.203 11:28:18 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:53.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.203 --rc genhtml_branch_coverage=1 00:04:53.203 --rc genhtml_function_coverage=1 00:04:53.203 --rc genhtml_legend=1 00:04:53.203 --rc geninfo_all_blocks=1 00:04:53.203 --rc geninfo_unexecuted_blocks=1 00:04:53.203 00:04:53.203 ' 00:04:53.203 11:28:18 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:53.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.203 --rc genhtml_branch_coverage=1 00:04:53.203 --rc genhtml_function_coverage=1 00:04:53.203 --rc genhtml_legend=1 00:04:53.203 --rc geninfo_all_blocks=1 00:04:53.203 --rc geninfo_unexecuted_blocks=1 00:04:53.203 00:04:53.203 ' 00:04:53.203 11:28:18 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:53.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.203 --rc genhtml_branch_coverage=1 00:04:53.203 --rc genhtml_function_coverage=1 00:04:53.203 --rc genhtml_legend=1 00:04:53.203 --rc geninfo_all_blocks=1 00:04:53.203 --rc geninfo_unexecuted_blocks=1 00:04:53.203 00:04:53.203 ' 00:04:53.203 11:28:18 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:53.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.203 --rc genhtml_branch_coverage=1 00:04:53.203 --rc genhtml_function_coverage=1 00:04:53.203 --rc genhtml_legend=1 00:04:53.203 --rc geninfo_all_blocks=1 00:04:53.203 --rc geninfo_unexecuted_blocks=1 00:04:53.203 00:04:53.203 ' 00:04:53.203 11:28:18 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:53.203 11:28:18 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:53.203 11:28:18 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:53.203 11:28:18 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:53.203 11:28:18 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:53.203 11:28:18 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:53.203 11:28:18 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:53.203 11:28:18 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:53.203 11:28:18 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:53.203 11:28:18 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:53.203 11:28:18 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:53.203 11:28:18 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:53.203 11:28:18 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:53.203 11:28:18 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:53.203 11:28:18 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:53.203 11:28:18 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:53.203 11:28:18 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:53.203 11:28:18 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:53.203 11:28:18 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:53.203 11:28:18 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:53.203 11:28:18 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:53.203 11:28:18 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:53.203 11:28:18 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:53.203 11:28:18 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.203 11:28:18 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.204 11:28:18 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.204 11:28:18 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:53.204 11:28:18 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.204 11:28:18 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:53.204 11:28:18 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:53.204 11:28:18 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:53.204 11:28:18 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:53.204 11:28:18 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:53.204 11:28:18 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:53.204 11:28:18 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:53.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:53.204 11:28:18 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:53.204 11:28:18 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:53.204 11:28:18 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:53.204 11:28:18 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:53.204 11:28:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:53.204 11:28:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:53.204 11:28:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:53.204 11:28:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:53.204 11:28:18 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:53.204 11:28:18 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:53.204 11:28:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:53.204 11:28:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:53.204 11:28:18 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:53.204 11:28:18 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:53.204 INFO: launching applications... 
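Everything in json_config/common.sh is keyed off a single app name ("target") through the four associative arrays declared above, which is what lets the start/stop/kill helpers stay generic. A cut-down sketch of that pattern, with array contents copied from this trace; start_app here is a hypothetical stand-in for json_config_test_start_app, assuming spdk_tgt is on PATH:

  declare -A app_pid=([target]='')
  declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
  declare -A app_params=([target]='-m 0x1 -s 1024')
  declare -A configs_path=([target]='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')

  start_app() {
      local app=$1
      # app_params is intentionally unquoted so the flags word-split
      spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
      app_pid[$app]=$!
  }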
00:04:53.204 11:28:18 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:53.204 11:28:18 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:53.204 11:28:18 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:53.204 11:28:18 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:53.204 11:28:18 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:53.204 11:28:18 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:53.204 11:28:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:53.204 11:28:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:53.204 11:28:18 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=836389 00:04:53.204 11:28:18 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:53.204 Waiting for target to run... 00:04:53.204 11:28:18 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 836389 /var/tmp/spdk_tgt.sock 00:04:53.204 11:28:18 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 836389 ']' 00:04:53.204 11:28:18 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:53.204 11:28:18 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:53.204 11:28:18 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:53.204 11:28:18 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:53.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:53.204 11:28:18 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:53.204 11:28:18 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:53.204 [2024-11-15 11:28:18.671498] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:04:53.204 [2024-11-15 11:28:18.671588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid836389 ] 00:04:53.777 [2024-11-15 11:28:19.049479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.777 [2024-11-15 11:28:19.079185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.037 11:28:19 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:54.037 11:28:19 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:04:54.037 11:28:19 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:54.037 00:04:54.037 11:28:19 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:54.037 INFO: shutting down applications... 
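waitforlisten above blocks until the freshly launched target answers on its UNIX socket. One way to approximate it is to poll a cheap RPC with a short per-call timeout and a bounded retry count; the 100 retries mirror max_retries in the trace, while the 1 s timeout and 0.5 s backoff here are assumptions:

  sock=/var/tmp/spdk_tgt.sock
  for ((i = 0; i < 100; i++)); do
      rpc.py -s "$sock" -t 1 rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.5
  done

rpc.py can also do the connection retrying itself via -r, as the spdkcli_tcp test later in this run does.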
00:04:54.037 11:28:19 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:54.037 11:28:19 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:54.037 11:28:19 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:54.037 11:28:19 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 836389 ]] 00:04:54.037 11:28:19 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 836389 00:04:54.037 11:28:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:54.037 11:28:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:54.037 11:28:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 836389 00:04:54.037 11:28:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:54.609 11:28:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:54.609 11:28:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:54.609 11:28:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 836389 00:04:54.609 11:28:19 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:54.609 11:28:19 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:54.609 11:28:19 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:54.609 11:28:19 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:54.609 SPDK target shutdown done 00:04:54.609 11:28:19 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:54.609 Success 00:04:54.609 00:04:54.609 real 0m1.585s 00:04:54.609 user 0m1.106s 00:04:54.609 sys 0m0.518s 00:04:54.609 11:28:19 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:54.609 11:28:19 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:54.609 ************************************ 00:04:54.609 END TEST json_config_extra_key 00:04:54.609 ************************************ 00:04:54.609 11:28:20 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:54.609 11:28:20 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:54.609 11:28:20 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:54.609 11:28:20 -- common/autotest_common.sh@10 -- # set +x 00:04:54.609 ************************************ 00:04:54.609 START TEST alias_rpc 00:04:54.609 ************************************ 00:04:54.609 11:28:20 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:54.870 * Looking for test storage... 
00:04:54.870 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:54.870 11:28:20 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:54.870 11:28:20 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:54.870 11:28:20 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:54.870 11:28:20 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:54.870 11:28:20 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:54.870 11:28:20 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:54.870 11:28:20 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:54.870 11:28:20 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.870 11:28:20 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:54.870 11:28:20 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:54.870 11:28:20 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:54.870 11:28:20 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:54.870 11:28:20 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:54.870 11:28:20 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:54.870 11:28:20 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:54.870 11:28:20 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:54.870 11:28:20 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:54.870 11:28:20 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:54.870 11:28:20 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:54.870 11:28:20 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:54.870 11:28:20 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:54.870 11:28:20 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.870 11:28:20 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:54.870 11:28:20 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:54.870 11:28:20 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:54.870 11:28:20 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:54.870 11:28:20 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.870 11:28:20 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:54.870 11:28:20 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:54.870 11:28:20 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:54.870 11:28:20 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:54.871 11:28:20 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:54.871 11:28:20 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.871 11:28:20 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:54.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.871 --rc genhtml_branch_coverage=1 00:04:54.871 --rc genhtml_function_coverage=1 00:04:54.871 --rc genhtml_legend=1 00:04:54.871 --rc geninfo_all_blocks=1 00:04:54.871 --rc geninfo_unexecuted_blocks=1 00:04:54.871 00:04:54.871 ' 00:04:54.871 11:28:20 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:54.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.871 --rc genhtml_branch_coverage=1 00:04:54.871 --rc genhtml_function_coverage=1 00:04:54.871 --rc genhtml_legend=1 00:04:54.871 --rc geninfo_all_blocks=1 00:04:54.871 --rc geninfo_unexecuted_blocks=1 00:04:54.871 00:04:54.871 ' 00:04:54.871 11:28:20 
alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:54.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.871 --rc genhtml_branch_coverage=1 00:04:54.871 --rc genhtml_function_coverage=1 00:04:54.871 --rc genhtml_legend=1 00:04:54.871 --rc geninfo_all_blocks=1 00:04:54.871 --rc geninfo_unexecuted_blocks=1 00:04:54.871 00:04:54.871 ' 00:04:54.871 11:28:20 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:54.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.871 --rc genhtml_branch_coverage=1 00:04:54.871 --rc genhtml_function_coverage=1 00:04:54.871 --rc genhtml_legend=1 00:04:54.871 --rc geninfo_all_blocks=1 00:04:54.871 --rc geninfo_unexecuted_blocks=1 00:04:54.871 00:04:54.871 ' 00:04:54.871 11:28:20 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:54.871 11:28:20 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=836743 00:04:54.871 11:28:20 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 836743 00:04:54.871 11:28:20 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:54.871 11:28:20 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 836743 ']' 00:04:54.871 11:28:20 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.871 11:28:20 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:54.871 11:28:20 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.871 11:28:20 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:54.871 11:28:20 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.871 [2024-11-15 11:28:20.318077] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:04:54.871 [2024-11-15 11:28:20.318137] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid836743 ] 00:04:55.132 [2024-11-15 11:28:20.405026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.132 [2024-11-15 11:28:20.442056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.704 11:28:21 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:55.704 11:28:21 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:55.704 11:28:21 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:55.962 11:28:21 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 836743 00:04:55.962 11:28:21 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 836743 ']' 00:04:55.962 11:28:21 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 836743 00:04:55.962 11:28:21 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:04:55.962 11:28:21 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:55.962 11:28:21 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 836743 00:04:55.962 11:28:21 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:55.962 11:28:21 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:55.962 11:28:21 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 836743' 00:04:55.962 killing process with pid 836743 00:04:55.962 11:28:21 alias_rpc -- common/autotest_common.sh@971 -- # kill 836743 00:04:55.962 11:28:21 alias_rpc -- common/autotest_common.sh@976 -- # wait 836743 00:04:56.222 00:04:56.222 real 0m1.513s 00:04:56.222 user 0m1.691s 00:04:56.222 sys 0m0.405s 00:04:56.222 11:28:21 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:56.222 11:28:21 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.222 ************************************ 00:04:56.222 END TEST alias_rpc 00:04:56.222 ************************************ 00:04:56.222 11:28:21 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:56.222 11:28:21 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:56.222 11:28:21 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:56.222 11:28:21 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:56.222 11:28:21 -- common/autotest_common.sh@10 -- # set +x 00:04:56.222 ************************************ 00:04:56.222 START TEST spdkcli_tcp 00:04:56.222 ************************************ 00:04:56.222 11:28:21 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:56.484 * Looking for test storage... 
00:04:56.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:56.484 11:28:21 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:56.484 11:28:21 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:56.484 11:28:21 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:56.484 11:28:21 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:56.484 11:28:21 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.484 11:28:21 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.484 11:28:21 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.484 11:28:21 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.484 11:28:21 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.484 11:28:21 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.484 11:28:21 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.484 11:28:21 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.484 11:28:21 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.484 11:28:21 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.484 11:28:21 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.484 11:28:21 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:56.484 11:28:21 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:56.484 11:28:21 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.484 11:28:21 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:56.484 11:28:21 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:56.484 11:28:21 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:56.484 11:28:21 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.484 11:28:21 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:56.484 11:28:21 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.484 11:28:21 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:56.484 11:28:21 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:56.484 11:28:21 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.484 11:28:21 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:56.484 11:28:21 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.484 11:28:21 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.484 11:28:21 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.484 11:28:21 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:56.484 11:28:21 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.484 11:28:21 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:56.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.484 --rc genhtml_branch_coverage=1 00:04:56.484 --rc genhtml_function_coverage=1 00:04:56.484 --rc genhtml_legend=1 00:04:56.484 --rc geninfo_all_blocks=1 00:04:56.484 --rc geninfo_unexecuted_blocks=1 00:04:56.484 00:04:56.484 ' 00:04:56.484 11:28:21 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:56.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.484 --rc genhtml_branch_coverage=1 00:04:56.484 --rc genhtml_function_coverage=1 00:04:56.484 --rc genhtml_legend=1 00:04:56.484 --rc geninfo_all_blocks=1 00:04:56.484 --rc 
geninfo_unexecuted_blocks=1 00:04:56.484 00:04:56.484 ' 00:04:56.484 11:28:21 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:56.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.484 --rc genhtml_branch_coverage=1 00:04:56.484 --rc genhtml_function_coverage=1 00:04:56.484 --rc genhtml_legend=1 00:04:56.484 --rc geninfo_all_blocks=1 00:04:56.484 --rc geninfo_unexecuted_blocks=1 00:04:56.484 00:04:56.484 ' 00:04:56.484 11:28:21 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:56.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.484 --rc genhtml_branch_coverage=1 00:04:56.484 --rc genhtml_function_coverage=1 00:04:56.484 --rc genhtml_legend=1 00:04:56.484 --rc geninfo_all_blocks=1 00:04:56.484 --rc geninfo_unexecuted_blocks=1 00:04:56.484 00:04:56.485 ' 00:04:56.485 11:28:21 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:56.485 11:28:21 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:56.485 11:28:21 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:56.485 11:28:21 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:56.485 11:28:21 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:56.485 11:28:21 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:56.485 11:28:21 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:56.485 11:28:21 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:56.485 11:28:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:56.485 11:28:21 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=837088 00:04:56.485 11:28:21 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 837088 00:04:56.485 11:28:21 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:56.485 11:28:21 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 837088 ']' 00:04:56.485 11:28:21 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.485 11:28:21 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:56.485 11:28:21 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.485 11:28:21 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:56.485 11:28:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:56.485 [2024-11-15 11:28:21.918619] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:04:56.485 [2024-11-15 11:28:21.918693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid837088 ] 00:04:56.746 [2024-11-15 11:28:22.006967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:56.746 [2024-11-15 11:28:22.043011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.746 [2024-11-15 11:28:22.043013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.317 11:28:22 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:57.317 11:28:22 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:04:57.317 11:28:22 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=837252 00:04:57.317 11:28:22 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:57.317 11:28:22 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:57.577 [ 00:04:57.577 "bdev_malloc_delete", 00:04:57.577 "bdev_malloc_create", 00:04:57.577 "bdev_null_resize", 00:04:57.577 "bdev_null_delete", 00:04:57.577 "bdev_null_create", 00:04:57.577 "bdev_nvme_cuse_unregister", 00:04:57.577 "bdev_nvme_cuse_register", 00:04:57.577 "bdev_opal_new_user", 00:04:57.577 "bdev_opal_set_lock_state", 00:04:57.577 "bdev_opal_delete", 00:04:57.577 "bdev_opal_get_info", 00:04:57.577 "bdev_opal_create", 00:04:57.577 "bdev_nvme_opal_revert", 00:04:57.578 "bdev_nvme_opal_init", 00:04:57.578 "bdev_nvme_send_cmd", 00:04:57.578 "bdev_nvme_set_keys", 00:04:57.578 "bdev_nvme_get_path_iostat", 00:04:57.578 "bdev_nvme_get_mdns_discovery_info", 00:04:57.578 "bdev_nvme_stop_mdns_discovery", 00:04:57.578 "bdev_nvme_start_mdns_discovery", 00:04:57.578 "bdev_nvme_set_multipath_policy", 00:04:57.578 "bdev_nvme_set_preferred_path", 00:04:57.578 "bdev_nvme_get_io_paths", 00:04:57.578 "bdev_nvme_remove_error_injection", 00:04:57.578 "bdev_nvme_add_error_injection", 00:04:57.578 "bdev_nvme_get_discovery_info", 00:04:57.578 "bdev_nvme_stop_discovery", 00:04:57.578 "bdev_nvme_start_discovery", 00:04:57.578 "bdev_nvme_get_controller_health_info", 00:04:57.578 "bdev_nvme_disable_controller", 00:04:57.578 "bdev_nvme_enable_controller", 00:04:57.578 "bdev_nvme_reset_controller", 00:04:57.578 "bdev_nvme_get_transport_statistics", 00:04:57.578 "bdev_nvme_apply_firmware", 00:04:57.578 "bdev_nvme_detach_controller", 00:04:57.578 "bdev_nvme_get_controllers", 00:04:57.578 "bdev_nvme_attach_controller", 00:04:57.578 "bdev_nvme_set_hotplug", 00:04:57.578 "bdev_nvme_set_options", 00:04:57.578 "bdev_passthru_delete", 00:04:57.578 "bdev_passthru_create", 00:04:57.578 "bdev_lvol_set_parent_bdev", 00:04:57.578 "bdev_lvol_set_parent", 00:04:57.578 "bdev_lvol_check_shallow_copy", 00:04:57.578 "bdev_lvol_start_shallow_copy", 00:04:57.578 "bdev_lvol_grow_lvstore", 00:04:57.578 "bdev_lvol_get_lvols", 00:04:57.578 "bdev_lvol_get_lvstores", 00:04:57.578 "bdev_lvol_delete", 00:04:57.578 "bdev_lvol_set_read_only", 00:04:57.578 "bdev_lvol_resize", 00:04:57.578 "bdev_lvol_decouple_parent", 00:04:57.578 "bdev_lvol_inflate", 00:04:57.578 "bdev_lvol_rename", 00:04:57.578 "bdev_lvol_clone_bdev", 00:04:57.578 "bdev_lvol_clone", 00:04:57.578 "bdev_lvol_snapshot", 00:04:57.578 "bdev_lvol_create", 00:04:57.578 "bdev_lvol_delete_lvstore", 00:04:57.578 "bdev_lvol_rename_lvstore", 
00:04:57.578 "bdev_lvol_create_lvstore", 00:04:57.578 "bdev_raid_set_options", 00:04:57.578 "bdev_raid_remove_base_bdev", 00:04:57.578 "bdev_raid_add_base_bdev", 00:04:57.578 "bdev_raid_delete", 00:04:57.578 "bdev_raid_create", 00:04:57.578 "bdev_raid_get_bdevs", 00:04:57.578 "bdev_error_inject_error", 00:04:57.578 "bdev_error_delete", 00:04:57.578 "bdev_error_create", 00:04:57.578 "bdev_split_delete", 00:04:57.578 "bdev_split_create", 00:04:57.578 "bdev_delay_delete", 00:04:57.578 "bdev_delay_create", 00:04:57.578 "bdev_delay_update_latency", 00:04:57.578 "bdev_zone_block_delete", 00:04:57.578 "bdev_zone_block_create", 00:04:57.578 "blobfs_create", 00:04:57.578 "blobfs_detect", 00:04:57.578 "blobfs_set_cache_size", 00:04:57.578 "bdev_aio_delete", 00:04:57.578 "bdev_aio_rescan", 00:04:57.578 "bdev_aio_create", 00:04:57.578 "bdev_ftl_set_property", 00:04:57.578 "bdev_ftl_get_properties", 00:04:57.578 "bdev_ftl_get_stats", 00:04:57.578 "bdev_ftl_unmap", 00:04:57.578 "bdev_ftl_unload", 00:04:57.578 "bdev_ftl_delete", 00:04:57.578 "bdev_ftl_load", 00:04:57.578 "bdev_ftl_create", 00:04:57.578 "bdev_virtio_attach_controller", 00:04:57.578 "bdev_virtio_scsi_get_devices", 00:04:57.578 "bdev_virtio_detach_controller", 00:04:57.578 "bdev_virtio_blk_set_hotplug", 00:04:57.578 "bdev_iscsi_delete", 00:04:57.578 "bdev_iscsi_create", 00:04:57.578 "bdev_iscsi_set_options", 00:04:57.578 "accel_error_inject_error", 00:04:57.578 "ioat_scan_accel_module", 00:04:57.578 "dsa_scan_accel_module", 00:04:57.578 "iaa_scan_accel_module", 00:04:57.578 "vfu_virtio_create_fs_endpoint", 00:04:57.578 "vfu_virtio_create_scsi_endpoint", 00:04:57.578 "vfu_virtio_scsi_remove_target", 00:04:57.578 "vfu_virtio_scsi_add_target", 00:04:57.578 "vfu_virtio_create_blk_endpoint", 00:04:57.578 "vfu_virtio_delete_endpoint", 00:04:57.578 "keyring_file_remove_key", 00:04:57.578 "keyring_file_add_key", 00:04:57.578 "keyring_linux_set_options", 00:04:57.578 "fsdev_aio_delete", 00:04:57.578 "fsdev_aio_create", 00:04:57.578 "iscsi_get_histogram", 00:04:57.578 "iscsi_enable_histogram", 00:04:57.578 "iscsi_set_options", 00:04:57.578 "iscsi_get_auth_groups", 00:04:57.578 "iscsi_auth_group_remove_secret", 00:04:57.578 "iscsi_auth_group_add_secret", 00:04:57.578 "iscsi_delete_auth_group", 00:04:57.578 "iscsi_create_auth_group", 00:04:57.578 "iscsi_set_discovery_auth", 00:04:57.578 "iscsi_get_options", 00:04:57.578 "iscsi_target_node_request_logout", 00:04:57.578 "iscsi_target_node_set_redirect", 00:04:57.578 "iscsi_target_node_set_auth", 00:04:57.578 "iscsi_target_node_add_lun", 00:04:57.578 "iscsi_get_stats", 00:04:57.578 "iscsi_get_connections", 00:04:57.578 "iscsi_portal_group_set_auth", 00:04:57.578 "iscsi_start_portal_group", 00:04:57.578 "iscsi_delete_portal_group", 00:04:57.578 "iscsi_create_portal_group", 00:04:57.578 "iscsi_get_portal_groups", 00:04:57.578 "iscsi_delete_target_node", 00:04:57.578 "iscsi_target_node_remove_pg_ig_maps", 00:04:57.578 "iscsi_target_node_add_pg_ig_maps", 00:04:57.578 "iscsi_create_target_node", 00:04:57.578 "iscsi_get_target_nodes", 00:04:57.578 "iscsi_delete_initiator_group", 00:04:57.578 "iscsi_initiator_group_remove_initiators", 00:04:57.578 "iscsi_initiator_group_add_initiators", 00:04:57.578 "iscsi_create_initiator_group", 00:04:57.578 "iscsi_get_initiator_groups", 00:04:57.578 "nvmf_set_crdt", 00:04:57.578 "nvmf_set_config", 00:04:57.578 "nvmf_set_max_subsystems", 00:04:57.578 "nvmf_stop_mdns_prr", 00:04:57.578 "nvmf_publish_mdns_prr", 00:04:57.578 "nvmf_subsystem_get_listeners", 00:04:57.578 
"nvmf_subsystem_get_qpairs", 00:04:57.578 "nvmf_subsystem_get_controllers", 00:04:57.578 "nvmf_get_stats", 00:04:57.578 "nvmf_get_transports", 00:04:57.578 "nvmf_create_transport", 00:04:57.578 "nvmf_get_targets", 00:04:57.578 "nvmf_delete_target", 00:04:57.578 "nvmf_create_target", 00:04:57.578 "nvmf_subsystem_allow_any_host", 00:04:57.578 "nvmf_subsystem_set_keys", 00:04:57.578 "nvmf_subsystem_remove_host", 00:04:57.578 "nvmf_subsystem_add_host", 00:04:57.578 "nvmf_ns_remove_host", 00:04:57.578 "nvmf_ns_add_host", 00:04:57.578 "nvmf_subsystem_remove_ns", 00:04:57.578 "nvmf_subsystem_set_ns_ana_group", 00:04:57.578 "nvmf_subsystem_add_ns", 00:04:57.578 "nvmf_subsystem_listener_set_ana_state", 00:04:57.578 "nvmf_discovery_get_referrals", 00:04:57.578 "nvmf_discovery_remove_referral", 00:04:57.578 "nvmf_discovery_add_referral", 00:04:57.578 "nvmf_subsystem_remove_listener", 00:04:57.578 "nvmf_subsystem_add_listener", 00:04:57.578 "nvmf_delete_subsystem", 00:04:57.578 "nvmf_create_subsystem", 00:04:57.578 "nvmf_get_subsystems", 00:04:57.578 "env_dpdk_get_mem_stats", 00:04:57.578 "nbd_get_disks", 00:04:57.578 "nbd_stop_disk", 00:04:57.578 "nbd_start_disk", 00:04:57.578 "ublk_recover_disk", 00:04:57.578 "ublk_get_disks", 00:04:57.578 "ublk_stop_disk", 00:04:57.578 "ublk_start_disk", 00:04:57.578 "ublk_destroy_target", 00:04:57.578 "ublk_create_target", 00:04:57.578 "virtio_blk_create_transport", 00:04:57.578 "virtio_blk_get_transports", 00:04:57.578 "vhost_controller_set_coalescing", 00:04:57.578 "vhost_get_controllers", 00:04:57.578 "vhost_delete_controller", 00:04:57.578 "vhost_create_blk_controller", 00:04:57.578 "vhost_scsi_controller_remove_target", 00:04:57.578 "vhost_scsi_controller_add_target", 00:04:57.578 "vhost_start_scsi_controller", 00:04:57.578 "vhost_create_scsi_controller", 00:04:57.578 "thread_set_cpumask", 00:04:57.578 "scheduler_set_options", 00:04:57.578 "framework_get_governor", 00:04:57.578 "framework_get_scheduler", 00:04:57.578 "framework_set_scheduler", 00:04:57.578 "framework_get_reactors", 00:04:57.578 "thread_get_io_channels", 00:04:57.578 "thread_get_pollers", 00:04:57.579 "thread_get_stats", 00:04:57.579 "framework_monitor_context_switch", 00:04:57.579 "spdk_kill_instance", 00:04:57.579 "log_enable_timestamps", 00:04:57.579 "log_get_flags", 00:04:57.579 "log_clear_flag", 00:04:57.579 "log_set_flag", 00:04:57.579 "log_get_level", 00:04:57.579 "log_set_level", 00:04:57.579 "log_get_print_level", 00:04:57.579 "log_set_print_level", 00:04:57.579 "framework_enable_cpumask_locks", 00:04:57.579 "framework_disable_cpumask_locks", 00:04:57.579 "framework_wait_init", 00:04:57.579 "framework_start_init", 00:04:57.579 "scsi_get_devices", 00:04:57.579 "bdev_get_histogram", 00:04:57.579 "bdev_enable_histogram", 00:04:57.579 "bdev_set_qos_limit", 00:04:57.579 "bdev_set_qd_sampling_period", 00:04:57.579 "bdev_get_bdevs", 00:04:57.579 "bdev_reset_iostat", 00:04:57.579 "bdev_get_iostat", 00:04:57.579 "bdev_examine", 00:04:57.579 "bdev_wait_for_examine", 00:04:57.579 "bdev_set_options", 00:04:57.579 "accel_get_stats", 00:04:57.579 "accel_set_options", 00:04:57.579 "accel_set_driver", 00:04:57.579 "accel_crypto_key_destroy", 00:04:57.579 "accel_crypto_keys_get", 00:04:57.579 "accel_crypto_key_create", 00:04:57.579 "accel_assign_opc", 00:04:57.579 "accel_get_module_info", 00:04:57.579 "accel_get_opc_assignments", 00:04:57.579 "vmd_rescan", 00:04:57.579 "vmd_remove_device", 00:04:57.579 "vmd_enable", 00:04:57.579 "sock_get_default_impl", 00:04:57.579 "sock_set_default_impl", 
00:04:57.579 "sock_impl_set_options", 00:04:57.579 "sock_impl_get_options", 00:04:57.579 "iobuf_get_stats", 00:04:57.579 "iobuf_set_options", 00:04:57.579 "keyring_get_keys", 00:04:57.579 "vfu_tgt_set_base_path", 00:04:57.579 "framework_get_pci_devices", 00:04:57.579 "framework_get_config", 00:04:57.579 "framework_get_subsystems", 00:04:57.579 "fsdev_set_opts", 00:04:57.579 "fsdev_get_opts", 00:04:57.579 "trace_get_info", 00:04:57.579 "trace_get_tpoint_group_mask", 00:04:57.579 "trace_disable_tpoint_group", 00:04:57.579 "trace_enable_tpoint_group", 00:04:57.579 "trace_clear_tpoint_mask", 00:04:57.579 "trace_set_tpoint_mask", 00:04:57.579 "notify_get_notifications", 00:04:57.579 "notify_get_types", 00:04:57.579 "spdk_get_version", 00:04:57.579 "rpc_get_methods" 00:04:57.579 ] 00:04:57.579 11:28:22 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:57.579 11:28:22 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:57.579 11:28:22 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:57.579 11:28:22 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:57.579 11:28:22 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 837088 00:04:57.579 11:28:22 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 837088 ']' 00:04:57.579 11:28:22 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 837088 00:04:57.579 11:28:22 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:04:57.579 11:28:22 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:57.579 11:28:22 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 837088 00:04:57.579 11:28:22 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:57.579 11:28:22 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:57.579 11:28:22 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 837088' 00:04:57.579 killing process with pid 837088 00:04:57.579 11:28:22 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 837088 00:04:57.579 11:28:22 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 837088 00:04:57.841 00:04:57.841 real 0m1.497s 00:04:57.841 user 0m2.667s 00:04:57.841 sys 0m0.485s 00:04:57.841 11:28:23 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:57.841 11:28:23 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:57.841 ************************************ 00:04:57.841 END TEST spdkcli_tcp 00:04:57.841 ************************************ 00:04:57.841 11:28:23 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:57.841 11:28:23 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:57.841 11:28:23 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:57.841 11:28:23 -- common/autotest_common.sh@10 -- # set +x 00:04:57.841 ************************************ 00:04:57.841 START TEST dpdk_mem_utility 00:04:57.841 ************************************ 00:04:57.841 11:28:23 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:57.841 * Looking for test storage... 
00:04:57.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:57.841 11:28:23 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:57.841 11:28:23 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:04:57.841 11:28:23 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:58.103 11:28:23 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:58.103 11:28:23 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.103 11:28:23 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.103 11:28:23 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.103 11:28:23 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.103 11:28:23 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.103 11:28:23 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.103 11:28:23 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.103 11:28:23 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.103 11:28:23 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.103 11:28:23 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.103 11:28:23 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.103 11:28:23 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:58.103 11:28:23 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:58.103 11:28:23 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.103 11:28:23 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:58.103 11:28:23 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:58.103 11:28:23 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:58.103 11:28:23 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.103 11:28:23 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:58.103 11:28:23 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.103 11:28:23 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:58.103 11:28:23 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:58.103 11:28:23 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.103 11:28:23 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:58.103 11:28:23 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.103 11:28:23 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.103 11:28:23 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.103 11:28:23 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:58.103 11:28:23 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.103 11:28:23 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:58.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.103 --rc genhtml_branch_coverage=1 00:04:58.103 --rc genhtml_function_coverage=1 00:04:58.103 --rc genhtml_legend=1 00:04:58.103 --rc geninfo_all_blocks=1 00:04:58.103 --rc geninfo_unexecuted_blocks=1 00:04:58.103 00:04:58.103 ' 00:04:58.103 11:28:23 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:58.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.103 --rc 
genhtml_branch_coverage=1 00:04:58.103 --rc genhtml_function_coverage=1 00:04:58.103 --rc genhtml_legend=1 00:04:58.103 --rc geninfo_all_blocks=1 00:04:58.103 --rc geninfo_unexecuted_blocks=1 00:04:58.103 00:04:58.103 ' 00:04:58.103 11:28:23 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:58.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.103 --rc genhtml_branch_coverage=1 00:04:58.103 --rc genhtml_function_coverage=1 00:04:58.103 --rc genhtml_legend=1 00:04:58.103 --rc geninfo_all_blocks=1 00:04:58.103 --rc geninfo_unexecuted_blocks=1 00:04:58.103 00:04:58.103 ' 00:04:58.103 11:28:23 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:58.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.103 --rc genhtml_branch_coverage=1 00:04:58.103 --rc genhtml_function_coverage=1 00:04:58.103 --rc genhtml_legend=1 00:04:58.103 --rc geninfo_all_blocks=1 00:04:58.103 --rc geninfo_unexecuted_blocks=1 00:04:58.103 00:04:58.103 ' 00:04:58.103 11:28:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:58.103 11:28:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=837431 00:04:58.103 11:28:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 837431 00:04:58.103 11:28:23 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 837431 ']' 00:04:58.103 11:28:23 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.103 11:28:23 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.103 11:28:23 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:58.103 11:28:23 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.103 11:28:23 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:58.103 11:28:23 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:58.103 [2024-11-15 11:28:23.480059] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
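The dpdk_mem_utility test starting here launches a fresh spdk_tgt, calls the env_dpdk_get_mem_stats RPC (which, as the trace below shows, writes /tmp/spdk_mem_dump.txt), and then renders that dump with scripts/dpdk_mem_info.py, once as a summary and once per heap with -m 0. A condensed sketch of the sequence, assuming the workspace layout used in this run:

    ./build/bin/spdk_tgt &                    # target whose memory is inspected
    ./scripts/rpc.py env_dpdk_get_mem_stats   # -> {"filename": "/tmp/spdk_mem_dump.txt"}
    ./scripts/dpdk_mem_info.py                # heap/mempool/memzone summary
    ./scripts/dpdk_mem_info.py -m 0           # free/busy element detail for heap id 0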
00:04:58.103 [2024-11-15 11:28:23.480136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid837431 ] 00:04:58.103 [2024-11-15 11:28:23.569428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.365 [2024-11-15 11:28:23.604204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.937 11:28:24 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:58.937 11:28:24 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:04:58.937 11:28:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:58.937 11:28:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:58.937 11:28:24 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.937 11:28:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:58.937 { 00:04:58.937 "filename": "/tmp/spdk_mem_dump.txt" 00:04:58.937 } 00:04:58.937 11:28:24 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.937 11:28:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:58.937 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:58.937 1 heaps totaling size 818.000000 MiB 00:04:58.937 size: 818.000000 MiB heap id: 0 00:04:58.937 end heaps---------- 00:04:58.937 9 mempools totaling size 603.782043 MiB 00:04:58.937 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:58.937 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:58.937 size: 100.555481 MiB name: bdev_io_837431 00:04:58.937 size: 50.003479 MiB name: msgpool_837431 00:04:58.937 size: 36.509338 MiB name: fsdev_io_837431 00:04:58.937 size: 21.763794 MiB name: PDU_Pool 00:04:58.937 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:58.937 size: 4.133484 MiB name: evtpool_837431 00:04:58.937 size: 0.026123 MiB name: Session_Pool 00:04:58.937 end mempools------- 00:04:58.937 6 memzones totaling size 4.142822 MiB 00:04:58.937 size: 1.000366 MiB name: RG_ring_0_837431 00:04:58.937 size: 1.000366 MiB name: RG_ring_1_837431 00:04:58.937 size: 1.000366 MiB name: RG_ring_4_837431 00:04:58.937 size: 1.000366 MiB name: RG_ring_5_837431 00:04:58.937 size: 0.125366 MiB name: RG_ring_2_837431 00:04:58.937 size: 0.015991 MiB name: RG_ring_3_837431 00:04:58.937 end memzones------- 00:04:58.937 11:28:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:58.937 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:58.937 list of free elements. 
size: 10.852478 MiB 00:04:58.937 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:58.937 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:58.937 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:58.937 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:58.937 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:58.937 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:58.937 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:58.937 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:58.937 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:58.937 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:58.937 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:58.937 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:58.937 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:58.937 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:58.937 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:58.937 list of standard malloc elements. size: 199.218628 MiB 00:04:58.937 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:58.937 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:58.937 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:58.937 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:58.937 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:58.937 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:58.937 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:58.937 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:58.937 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:58.937 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:58.937 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:58.937 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:58.937 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:58.937 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:58.937 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:58.937 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:58.937 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:58.937 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:58.937 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:58.937 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:58.937 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:58.937 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:58.937 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:58.937 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:58.937 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:58.937 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:58.937 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:58.937 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:58.937 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:58.937 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:58.937 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:58.937 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:58.937 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:58.937 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:58.937 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:58.937 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:58.937 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:58.937 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:58.937 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:58.937 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:58.937 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:58.937 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:58.937 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:58.937 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:58.937 list of memzone associated elements. size: 607.928894 MiB 00:04:58.937 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:58.937 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:58.937 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:58.937 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:58.937 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:58.937 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_837431_0 00:04:58.937 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:58.937 associated memzone info: size: 48.002930 MiB name: MP_msgpool_837431_0 00:04:58.937 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:58.937 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_837431_0 00:04:58.937 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:58.937 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:58.937 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:58.937 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:58.937 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:58.937 associated memzone info: size: 3.000122 MiB name: MP_evtpool_837431_0 00:04:58.937 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:58.937 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_837431 00:04:58.937 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:58.937 associated memzone info: size: 1.007996 MiB name: MP_evtpool_837431 00:04:58.937 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:58.937 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:58.937 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:58.937 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:58.937 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:58.938 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:58.938 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:58.938 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:58.938 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:58.938 associated memzone info: size: 1.000366 MiB name: RG_ring_0_837431 00:04:58.938 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:58.938 associated memzone info: size: 1.000366 MiB name: RG_ring_1_837431 00:04:58.938 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:58.938 associated memzone info: size: 1.000366 MiB name: RG_ring_4_837431 00:04:58.938 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:04:58.938 associated memzone info: size: 1.000366 MiB name: RG_ring_5_837431 00:04:58.938 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:58.938 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_837431 00:04:58.938 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:58.938 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_837431 00:04:58.938 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:58.938 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:58.938 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:58.938 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:58.938 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:58.938 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:58.938 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:58.938 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_837431 00:04:58.938 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:58.938 associated memzone info: size: 0.125366 MiB name: RG_ring_2_837431 00:04:58.938 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:58.938 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:58.938 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:58.938 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:58.938 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:58.938 associated memzone info: size: 0.015991 MiB name: RG_ring_3_837431 00:04:58.938 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:58.938 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:58.938 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:58.938 associated memzone info: size: 0.000183 MiB name: MP_msgpool_837431 00:04:58.938 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:58.938 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_837431 00:04:58.938 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:58.938 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_837431 00:04:58.938 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:58.938 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:58.938 11:28:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:58.938 11:28:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 837431 00:04:58.938 11:28:24 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 837431 ']' 00:04:58.938 11:28:24 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 837431 00:04:58.938 11:28:24 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:04:58.938 11:28:24 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:58.938 11:28:24 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 837431 00:04:58.938 11:28:24 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:58.938 11:28:24 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:58.938 11:28:24 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 837431' 00:04:58.938 killing process with pid 837431 00:04:58.938 11:28:24 dpdk_mem_utility -- 
common/autotest_common.sh@971 -- # kill 837431 00:04:58.938 11:28:24 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 837431 00:04:59.199 00:04:59.199 real 0m1.376s 00:04:59.199 user 0m1.420s 00:04:59.199 sys 0m0.417s 00:04:59.199 11:28:24 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:59.199 11:28:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:59.199 ************************************ 00:04:59.199 END TEST dpdk_mem_utility 00:04:59.199 ************************************ 00:04:59.199 11:28:24 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:59.199 11:28:24 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:59.199 11:28:24 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:59.199 11:28:24 -- common/autotest_common.sh@10 -- # set +x 00:04:59.199 ************************************ 00:04:59.199 START TEST event 00:04:59.199 ************************************ 00:04:59.199 11:28:24 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:59.459 * Looking for test storage... 00:04:59.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:59.459 11:28:24 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:59.459 11:28:24 event -- common/autotest_common.sh@1691 -- # lcov --version 00:04:59.459 11:28:24 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:59.459 11:28:24 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:59.459 11:28:24 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.459 11:28:24 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.459 11:28:24 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.459 11:28:24 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.459 11:28:24 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.459 11:28:24 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.459 11:28:24 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.459 11:28:24 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.459 11:28:24 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.459 11:28:24 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.459 11:28:24 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.459 11:28:24 event -- scripts/common.sh@344 -- # case "$op" in 00:04:59.459 11:28:24 event -- scripts/common.sh@345 -- # : 1 00:04:59.459 11:28:24 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.459 11:28:24 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:59.459 11:28:24 event -- scripts/common.sh@365 -- # decimal 1 00:04:59.459 11:28:24 event -- scripts/common.sh@353 -- # local d=1 00:04:59.459 11:28:24 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.459 11:28:24 event -- scripts/common.sh@355 -- # echo 1 00:04:59.459 11:28:24 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.459 11:28:24 event -- scripts/common.sh@366 -- # decimal 2 00:04:59.459 11:28:24 event -- scripts/common.sh@353 -- # local d=2 00:04:59.459 11:28:24 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.459 11:28:24 event -- scripts/common.sh@355 -- # echo 2 00:04:59.459 11:28:24 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.459 11:28:24 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.459 11:28:24 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.459 11:28:24 event -- scripts/common.sh@368 -- # return 0 00:04:59.459 11:28:24 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.459 11:28:24 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:59.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.459 --rc genhtml_branch_coverage=1 00:04:59.459 --rc genhtml_function_coverage=1 00:04:59.459 --rc genhtml_legend=1 00:04:59.459 --rc geninfo_all_blocks=1 00:04:59.459 --rc geninfo_unexecuted_blocks=1 00:04:59.459 00:04:59.459 ' 00:04:59.459 11:28:24 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:59.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.459 --rc genhtml_branch_coverage=1 00:04:59.459 --rc genhtml_function_coverage=1 00:04:59.459 --rc genhtml_legend=1 00:04:59.459 --rc geninfo_all_blocks=1 00:04:59.459 --rc geninfo_unexecuted_blocks=1 00:04:59.459 00:04:59.459 ' 00:04:59.459 11:28:24 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:59.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.459 --rc genhtml_branch_coverage=1 00:04:59.459 --rc genhtml_function_coverage=1 00:04:59.459 --rc genhtml_legend=1 00:04:59.459 --rc geninfo_all_blocks=1 00:04:59.459 --rc geninfo_unexecuted_blocks=1 00:04:59.459 00:04:59.459 ' 00:04:59.459 11:28:24 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:59.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.459 --rc genhtml_branch_coverage=1 00:04:59.459 --rc genhtml_function_coverage=1 00:04:59.459 --rc genhtml_legend=1 00:04:59.459 --rc geninfo_all_blocks=1 00:04:59.459 --rc geninfo_unexecuted_blocks=1 00:04:59.459 00:04:59.459 ' 00:04:59.459 11:28:24 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:59.459 11:28:24 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:59.459 11:28:24 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:59.459 11:28:24 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:04:59.459 11:28:24 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:59.459 11:28:24 event -- common/autotest_common.sh@10 -- # set +x 00:04:59.459 ************************************ 00:04:59.459 START TEST event_perf 00:04:59.459 ************************************ 00:04:59.459 11:28:24 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:59.459 Running I/O for 1 seconds...[2024-11-15 11:28:24.936845] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:04:59.459 [2024-11-15 11:28:24.936951] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid837746 ] 00:04:59.720 [2024-11-15 11:28:25.029856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:59.720 [2024-11-15 11:28:25.073635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.720 [2024-11-15 11:28:25.073987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:59.720 [2024-11-15 11:28:25.074148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.720 [2024-11-15 11:28:25.074148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:00.660 Running I/O for 1 seconds... 00:05:00.660 lcore 0: 177464 00:05:00.660 lcore 1: 177466 00:05:00.660 lcore 2: 177468 00:05:00.660 lcore 3: 177468 00:05:00.660 done. 00:05:00.660 00:05:00.660 real 0m1.187s 00:05:00.660 user 0m4.101s 00:05:00.660 sys 0m0.084s 00:05:00.660 11:28:26 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:00.660 11:28:26 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:00.660 ************************************ 00:05:00.660 END TEST event_perf 00:05:00.660 ************************************ 00:05:00.660 11:28:26 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:00.660 11:28:26 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:00.660 11:28:26 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:00.660 11:28:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:00.919 ************************************ 00:05:00.919 START TEST event_reactor 00:05:00.919 ************************************ 00:05:00.919 11:28:26 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:00.919 [2024-11-15 11:28:26.199212] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
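The event_perf run above starts the perf app on four cores (-m 0xF) for one second (-t 1); each reactor dispatches events for the duration, and the per-lcore counters printed at the end (about 177k events per lcore here) are the measured throughput. The invocation, assuming the same build tree:

    ./test/event/event_perf/event_perf -m 0xF -t 1
    # prints one "lcore N: <count>" line per enabled core, then "done."

The roughly equal per-lcore counts suggest the four reactors made even progress over the measurement window.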
00:05:00.919 [2024-11-15 11:28:26.199292] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid838085 ] 00:05:00.919 [2024-11-15 11:28:26.289031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.919 [2024-11-15 11:28:26.327882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.859 test_start 00:05:01.859 oneshot 00:05:01.859 tick 100 00:05:01.859 tick 100 00:05:01.859 tick 250 00:05:01.859 tick 100 00:05:01.859 tick 100 00:05:01.860 tick 250 00:05:01.860 tick 100 00:05:01.860 tick 500 00:05:01.860 tick 100 00:05:01.860 tick 100 00:05:01.860 tick 250 00:05:01.860 tick 100 00:05:01.860 tick 100 00:05:01.860 test_end 00:05:01.860 00:05:01.860 real 0m1.176s 00:05:01.860 user 0m1.088s 00:05:01.860 sys 0m0.084s 00:05:01.860 11:28:27 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:01.860 11:28:27 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:01.860 ************************************ 00:05:01.860 END TEST event_reactor 00:05:01.860 ************************************ 00:05:02.120 11:28:27 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:02.120 11:28:27 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:02.120 11:28:27 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:02.120 11:28:27 event -- common/autotest_common.sh@10 -- # set +x 00:05:02.120 ************************************ 00:05:02.120 START TEST event_reactor_perf 00:05:02.120 ************************************ 00:05:02.120 11:28:27 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:02.120 [2024-11-15 11:28:27.453209] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:05:02.120 [2024-11-15 11:28:27.453315] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid838435 ] 00:05:02.120 [2024-11-15 11:28:27.541299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.121 [2024-11-15 11:28:27.579255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.504 test_start 00:05:03.504 test_end 00:05:03.504 Performance: 541314 events per second 00:05:03.504 00:05:03.504 real 0m1.174s 00:05:03.504 user 0m1.087s 00:05:03.504 sys 0m0.083s 00:05:03.505 11:28:28 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:03.505 11:28:28 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:03.505 ************************************ 00:05:03.505 END TEST event_reactor_perf 00:05:03.505 ************************************ 00:05:03.505 11:28:28 event -- event/event.sh@49 -- # uname -s 00:05:03.505 11:28:28 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:03.505 11:28:28 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:03.505 11:28:28 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:03.505 11:28:28 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:03.505 11:28:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:03.505 ************************************ 00:05:03.505 START TEST event_scheduler 00:05:03.505 ************************************ 00:05:03.505 11:28:28 event.event_scheduler -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:03.505 * Looking for test storage... 
00:05:03.505 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:03.505 11:28:28 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:03.505 11:28:28 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:05:03.505 11:28:28 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:03.505 11:28:28 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:03.505 11:28:28 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.505 11:28:28 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.505 11:28:28 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.505 11:28:28 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.505 11:28:28 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.505 11:28:28 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.505 11:28:28 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.505 11:28:28 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.505 11:28:28 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.505 11:28:28 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.505 11:28:28 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.505 11:28:28 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:03.505 11:28:28 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:03.505 11:28:28 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.505 11:28:28 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.505 11:28:28 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:03.505 11:28:28 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:03.505 11:28:28 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.505 11:28:28 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:03.505 11:28:28 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.505 11:28:28 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:03.505 11:28:28 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:03.505 11:28:28 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.505 11:28:28 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:03.505 11:28:28 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.505 11:28:28 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.505 11:28:28 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.505 11:28:28 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:03.505 11:28:28 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.505 11:28:28 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:03.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.505 --rc genhtml_branch_coverage=1 00:05:03.505 --rc genhtml_function_coverage=1 00:05:03.505 --rc genhtml_legend=1 00:05:03.505 --rc geninfo_all_blocks=1 00:05:03.505 --rc geninfo_unexecuted_blocks=1 00:05:03.505 00:05:03.505 ' 00:05:03.505 11:28:28 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:03.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.505 --rc genhtml_branch_coverage=1 00:05:03.505 --rc genhtml_function_coverage=1 00:05:03.505 --rc genhtml_legend=1 00:05:03.505 --rc geninfo_all_blocks=1 00:05:03.505 --rc geninfo_unexecuted_blocks=1 00:05:03.505 00:05:03.505 ' 00:05:03.505 11:28:28 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:03.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.505 --rc genhtml_branch_coverage=1 00:05:03.505 --rc genhtml_function_coverage=1 00:05:03.505 --rc genhtml_legend=1 00:05:03.505 --rc geninfo_all_blocks=1 00:05:03.505 --rc geninfo_unexecuted_blocks=1 00:05:03.505 00:05:03.505 ' 00:05:03.505 11:28:28 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:03.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.505 --rc genhtml_branch_coverage=1 00:05:03.505 --rc genhtml_function_coverage=1 00:05:03.505 --rc genhtml_legend=1 00:05:03.505 --rc geninfo_all_blocks=1 00:05:03.505 --rc geninfo_unexecuted_blocks=1 00:05:03.505 00:05:03.505 ' 00:05:03.505 11:28:28 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:03.505 11:28:28 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=838789 00:05:03.505 11:28:28 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:03.505 11:28:28 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 838789 00:05:03.505 11:28:28 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 
00:05:03.505 11:28:28 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 838789 ']' 00:05:03.505 11:28:28 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.505 11:28:28 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:03.505 11:28:28 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.505 11:28:28 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:03.505 11:28:28 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:03.505 [2024-11-15 11:28:28.943523] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:05:03.505 [2024-11-15 11:28:28.943605] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid838789 ] 00:05:03.766 [2024-11-15 11:28:29.037471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:03.766 [2024-11-15 11:28:29.092926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.766 [2024-11-15 11:28:29.093088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.766 [2024-11-15 11:28:29.093250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:03.766 [2024-11-15 11:28:29.093250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:04.338 11:28:29 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:04.338 11:28:29 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:05:04.338 11:28:29 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:04.338 11:28:29 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.338 11:28:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:04.338 [2024-11-15 11:28:29.767683] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:04.338 [2024-11-15 11:28:29.767702] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:04.338 [2024-11-15 11:28:29.767713] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:04.338 [2024-11-15 11:28:29.767719] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:04.338 [2024-11-15 11:28:29.767725] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:04.338 11:28:29 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.338 11:28:29 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:04.338 11:28:29 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.338 11:28:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:04.601 [2024-11-15 11:28:29.836145] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
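The scheduler app is launched with --wait-for-rpc, which is what lets the test select a scheduler before the framework initializes; the *ERROR* from dpdk_governor about SMT siblings is tolerated, the dynamic scheduler simply starting without the governor and announcing its defaults (load limit 20, core limit 80, core busy 95). Stripped of the harness's rpc_cmd wrapper, the two RPCs amount to roughly:

    # select the dynamic scheduler; only valid before framework initialization
    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init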
00:05:04.601 11:28:29 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.601 11:28:29 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:04.601 11:28:29 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:04.601 11:28:29 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:04.601 11:28:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:04.601 ************************************ 00:05:04.601 START TEST scheduler_create_thread 00:05:04.601 ************************************ 00:05:04.601 11:28:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:05:04.601 11:28:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:04.601 11:28:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.601 11:28:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.601 2 00:05:04.601 11:28:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.601 11:28:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:04.601 11:28:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.601 11:28:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.601 3 00:05:04.601 11:28:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.601 11:28:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:04.601 11:28:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.601 11:28:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.601 4 00:05:04.601 11:28:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.601 11:28:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:04.601 11:28:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.601 11:28:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.601 5 00:05:04.601 11:28:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.601 11:28:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:04.601 11:28:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.601 11:28:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.601 6 00:05:04.601 11:28:29 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.601 11:28:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:04.601 11:28:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.601 11:28:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.601 7 00:05:04.601 11:28:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.601 11:28:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:04.601 11:28:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.601 11:28:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.601 8 00:05:04.601 11:28:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.601 11:28:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:04.601 11:28:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.601 11:28:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.601 9 00:05:04.601 11:28:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.601 11:28:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:04.601 11:28:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.601 11:28:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.172 10 00:05:05.172 11:28:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.172 11:28:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:05.172 11:28:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.172 11:28:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.668 11:28:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.668 11:28:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:06.668 11:28:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:06.668 11:28:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.668 11:28:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.253 11:28:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:07.253 11:28:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:07.253 11:28:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:07.253 11:28:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.198 11:28:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.198 11:28:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:08.198 11:28:33 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:08.198 11:28:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.198 11:28:33 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.769 11:28:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.769 00:05:08.769 real 0m4.224s 00:05:08.769 user 0m0.024s 00:05:08.769 sys 0m0.007s 00:05:08.769 11:28:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:08.769 11:28:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:08.769 ************************************ 00:05:08.769 END TEST scheduler_create_thread 00:05:08.769 ************************************ 00:05:08.769 11:28:34 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:08.769 11:28:34 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 838789 00:05:08.769 11:28:34 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 838789 ']' 00:05:08.769 11:28:34 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 838789 00:05:08.769 11:28:34 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:05:08.769 11:28:34 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:08.769 11:28:34 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 838789 00:05:08.769 11:28:34 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:08.769 11:28:34 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:08.769 11:28:34 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 838789' 00:05:08.769 killing process with pid 838789 00:05:08.769 11:28:34 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 838789 00:05:08.769 11:28:34 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 838789 00:05:09.029 [2024-11-15 11:28:34.377892] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
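The scheduler_create_thread test above drives the app through its test plugin: active (-a 100) and idle (-a 0) threads pinned to each of the four cores, an unpinned one_third_active thread (-a 30), a half_active thread created idle and raised to 50% activity via its returned thread id, and a throwaway thread deleted again. Condensed to the plugin RPCs, where thread ids 11 and 12 are the values this run happened to get back:

    RPC="./scripts/rpc.py --plugin scheduler_plugin"
    $RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100   # repeated for 0x2, 0x4, 0x8
    $RPC scheduler_thread_create -n idle_pinned -m 0x1 -a 0       # likewise one per core
    $RPC scheduler_thread_create -n one_third_active -a 30
    $RPC scheduler_thread_create -n half_active -a 0              # -> thread_id 11
    $RPC scheduler_thread_set_active 11 50
    $RPC scheduler_thread_create -n deleted -a 100                # -> thread_id 12
    $RPC scheduler_thread_delete 12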
00:05:09.290 00:05:09.290 real 0m5.848s 00:05:09.290 user 0m12.897s 00:05:09.290 sys 0m0.445s 00:05:09.290 11:28:34 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:09.290 11:28:34 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:09.290 ************************************ 00:05:09.290 END TEST event_scheduler 00:05:09.290 ************************************ 00:05:09.290 11:28:34 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:09.290 11:28:34 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:09.290 11:28:34 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:09.290 11:28:34 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:09.290 11:28:34 event -- common/autotest_common.sh@10 -- # set +x 00:05:09.290 ************************************ 00:05:09.290 START TEST app_repeat 00:05:09.290 ************************************ 00:05:09.290 11:28:34 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:05:09.290 11:28:34 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.290 11:28:34 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.290 11:28:34 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:09.290 11:28:34 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.290 11:28:34 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:09.290 11:28:34 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:09.290 11:28:34 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:09.290 11:28:34 event.app_repeat -- event/event.sh@19 -- # repeat_pid=839898 00:05:09.290 11:28:34 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:09.290 11:28:34 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:09.290 11:28:34 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 839898' 00:05:09.290 Process app_repeat pid: 839898 00:05:09.290 11:28:34 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:09.290 11:28:34 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:09.290 spdk_app_start Round 0 00:05:09.290 11:28:34 event.app_repeat -- event/event.sh@25 -- # waitforlisten 839898 /var/tmp/spdk-nbd.sock 00:05:09.290 11:28:34 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 839898 ']' 00:05:09.290 11:28:34 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:09.290 11:28:34 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:09.290 11:28:34 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:09.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:09.290 11:28:34 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:09.290 11:28:34 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:09.290 [2024-11-15 11:28:34.656488] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
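app_repeat is launched with its RPC server on /var/tmp/spdk-nbd.sock, a two-core mask (-m 0x3) and a 4-second repeat interval (-t 4), and the harness blocks in waitforlisten until the app both still exists and answers RPC on that socket. A rough equivalent of that wait loop, assuming rpc.py's -t timeout flag and the stock rpc_get_methods call (the real helper in autotest_common.sh adds retry accounting and xtrace handling on top of this):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    waitforlisten() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
      for ((i = 1; i <= 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1            # app died before listening
        "$rpc" -t 1 -s "$sock" rpc_get_methods >/dev/null 2>&1 && return 0
        sleep 0.1
      done
      return 1                                            # never came up
    }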
00:05:09.290 [2024-11-15 11:28:34.656587] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid839898 ] 00:05:09.290 [2024-11-15 11:28:34.740587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:09.290 [2024-11-15 11:28:34.777112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.290 [2024-11-15 11:28:34.777113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.551 11:28:34 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:09.551 11:28:34 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:09.551 11:28:34 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:09.551 Malloc0 00:05:09.551 11:28:35 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:09.812 Malloc1 00:05:09.812 11:28:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.812 11:28:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.812 11:28:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.812 11:28:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:09.812 11:28:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.812 11:28:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:09.812 11:28:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.812 11:28:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.812 11:28:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.812 11:28:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:09.812 11:28:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.812 11:28:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:09.812 11:28:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:09.812 11:28:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:09.812 11:28:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.812 11:28:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:10.073 /dev/nbd0 00:05:10.073 11:28:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:10.073 11:28:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:10.073 11:28:35 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:10.073 11:28:35 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:10.073 11:28:35 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:10.073 11:28:35 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:10.073 11:28:35 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 
/proc/partitions 00:05:10.073 11:28:35 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:10.073 11:28:35 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:10.073 11:28:35 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:10.073 11:28:35 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:10.073 1+0 records in 00:05:10.073 1+0 records out 00:05:10.073 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302486 s, 13.5 MB/s 00:05:10.073 11:28:35 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:10.073 11:28:35 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:10.073 11:28:35 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:10.073 11:28:35 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:10.073 11:28:35 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:10.073 11:28:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.073 11:28:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.073 11:28:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:10.335 /dev/nbd1 00:05:10.335 11:28:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:10.335 11:28:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:10.335 11:28:35 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:10.335 11:28:35 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:10.335 11:28:35 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:10.335 11:28:35 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:10.335 11:28:35 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:10.335 11:28:35 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:10.335 11:28:35 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:10.335 11:28:35 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:10.335 11:28:35 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:10.335 1+0 records in 00:05:10.335 1+0 records out 00:05:10.335 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000290746 s, 14.1 MB/s 00:05:10.335 11:28:35 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:10.335 11:28:35 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:10.335 11:28:35 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:10.335 11:28:35 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:10.335 11:28:35 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:10.335 11:28:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:10.335 11:28:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.335 
11:28:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:10.335 11:28:35 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.335 11:28:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.599 11:28:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:10.599 { 00:05:10.599 "nbd_device": "/dev/nbd0", 00:05:10.599 "bdev_name": "Malloc0" 00:05:10.599 }, 00:05:10.599 { 00:05:10.599 "nbd_device": "/dev/nbd1", 00:05:10.599 "bdev_name": "Malloc1" 00:05:10.599 } 00:05:10.599 ]' 00:05:10.599 11:28:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:10.599 { 00:05:10.599 "nbd_device": "/dev/nbd0", 00:05:10.599 "bdev_name": "Malloc0" 00:05:10.599 }, 00:05:10.599 { 00:05:10.599 "nbd_device": "/dev/nbd1", 00:05:10.599 "bdev_name": "Malloc1" 00:05:10.599 } 00:05:10.599 ]' 00:05:10.599 11:28:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:10.599 11:28:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:10.599 /dev/nbd1' 00:05:10.599 11:28:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:10.599 /dev/nbd1' 00:05:10.599 11:28:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:10.599 11:28:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:10.599 11:28:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:10.599 11:28:35 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:10.599 11:28:35 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:10.599 11:28:35 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:10.599 11:28:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.599 11:28:35 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:10.599 11:28:35 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:10.599 11:28:35 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:10.599 11:28:35 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:10.599 11:28:35 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:10.599 256+0 records in 00:05:10.599 256+0 records out 00:05:10.599 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124644 s, 84.1 MB/s 00:05:10.599 11:28:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.599 11:28:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:10.599 256+0 records in 00:05:10.599 256+0 records out 00:05:10.599 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119832 s, 87.5 MB/s 00:05:10.599 11:28:35 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.599 11:28:35 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:10.599 256+0 records in 00:05:10.599 256+0 records out 00:05:10.599 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129167 s, 81.2 MB/s 00:05:10.599 11:28:36 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:10.599 11:28:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.599 11:28:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:10.599 11:28:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:10.599 11:28:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:10.599 11:28:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:10.599 11:28:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:10.599 11:28:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:10.599 11:28:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:10.599 11:28:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:10.599 11:28:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:10.599 11:28:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:10.599 11:28:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:10.599 11:28:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.599 11:28:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.599 11:28:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:10.599 11:28:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:10.599 11:28:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:10.599 11:28:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:10.859 11:28:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:10.859 11:28:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:10.860 11:28:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:10.860 11:28:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:10.860 11:28:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:10.860 11:28:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:10.860 11:28:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:10.860 11:28:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:10.860 11:28:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:10.860 11:28:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:11.119 11:28:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:11.119 11:28:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:11.119 11:28:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:11.119 11:28:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:11.119 11:28:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:05:11.119 11:28:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:11.119 11:28:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:11.119 11:28:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:11.119 11:28:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:11.119 11:28:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.119 11:28:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:11.379 11:28:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:11.379 11:28:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:11.379 11:28:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:11.379 11:28:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:11.379 11:28:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:11.379 11:28:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:11.379 11:28:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:11.379 11:28:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:11.379 11:28:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:11.379 11:28:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:11.379 11:28:36 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:11.379 11:28:36 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:11.379 11:28:36 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:11.379 11:28:36 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:11.639 [2024-11-15 11:28:36.942346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:11.639 [2024-11-15 11:28:36.971620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.639 [2024-11-15 11:28:36.971628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.639 [2024-11-15 11:28:37.000759] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:11.639 [2024-11-15 11:28:37.000788] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:14.937 11:28:39 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:14.937 11:28:39 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:14.937 spdk_app_start Round 1 00:05:14.937 11:28:39 event.app_repeat -- event/event.sh@25 -- # waitforlisten 839898 /var/tmp/spdk-nbd.sock 00:05:14.937 11:28:39 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 839898 ']' 00:05:14.937 11:28:39 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:14.937 11:28:39 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:14.937 11:28:39 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:14.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
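Round 0 is now complete: two malloc bdevs (64 MB, 4 KiB blocks) were exported as /dev/nbd0 and /dev/nbd1, nbd_dd_data_verify pushed 1 MiB of random data through each device and read it back, the devices were stopped, and the app was asked to exit so the next round can restart it. The write/verify core is just dd plus cmp; a condensed sketch of it (the log stages the data in the nbdrandtest file under the SPDK test tree, a mktemp file here):

    tmp=$(mktemp)
    dd if=/dev/urandom of="$tmp" bs=4096 count=256            # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
      dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # O_DIRECT write, bypassing the page cache
      cmp -b -n 1M "$tmp" "$nbd"                              # byte-for-byte read-back check
    done
    rm "$tmp"

cmp exits nonzero on the first mismatching byte, which fails the verify step and with it the test.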
00:05:14.937 11:28:39 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:14.937 11:28:39 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:14.937 11:28:40 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:14.937 11:28:40 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:14.937 11:28:40 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:14.937 Malloc0 00:05:14.937 11:28:40 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:14.937 Malloc1 00:05:14.937 11:28:40 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:14.937 11:28:40 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.937 11:28:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.937 11:28:40 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:14.937 11:28:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.937 11:28:40 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:14.937 11:28:40 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:14.937 11:28:40 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.937 11:28:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:14.937 11:28:40 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:14.937 11:28:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.937 11:28:40 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:14.937 11:28:40 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:14.937 11:28:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:14.937 11:28:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.937 11:28:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:15.199 /dev/nbd0 00:05:15.199 11:28:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:15.199 11:28:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:15.199 11:28:40 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:15.199 11:28:40 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:15.199 11:28:40 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:15.199 11:28:40 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:15.199 11:28:40 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:15.199 11:28:40 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:15.199 11:28:40 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:15.199 11:28:40 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:15.199 11:28:40 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:15.199 1+0 records in 00:05:15.199 1+0 records out 00:05:15.199 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278277 s, 14.7 MB/s 00:05:15.199 11:28:40 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:15.199 11:28:40 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:15.199 11:28:40 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:15.199 11:28:40 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:15.199 11:28:40 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:15.199 11:28:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.199 11:28:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.199 11:28:40 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:15.459 /dev/nbd1 00:05:15.459 11:28:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:15.459 11:28:40 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:15.459 11:28:40 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:15.459 11:28:40 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:15.459 11:28:40 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:15.459 11:28:40 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:15.460 11:28:40 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:15.460 11:28:40 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:15.460 11:28:40 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:15.460 11:28:40 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:15.460 11:28:40 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:15.460 1+0 records in 00:05:15.460 1+0 records out 00:05:15.460 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000241643 s, 17.0 MB/s 00:05:15.460 11:28:40 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:15.460 11:28:40 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:15.460 11:28:40 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:15.460 11:28:40 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:15.460 11:28:40 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:15.460 11:28:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:15.460 11:28:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:15.460 11:28:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:15.460 11:28:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.460 11:28:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:15.721 11:28:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:15.721 { 00:05:15.721 "nbd_device": "/dev/nbd0", 00:05:15.721 "bdev_name": "Malloc0" 00:05:15.721 }, 00:05:15.721 { 00:05:15.721 "nbd_device": "/dev/nbd1", 00:05:15.721 "bdev_name": "Malloc1" 00:05:15.721 } 00:05:15.721 ]' 00:05:15.721 11:28:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:15.721 { 00:05:15.721 "nbd_device": "/dev/nbd0", 00:05:15.721 "bdev_name": "Malloc0" 00:05:15.721 }, 00:05:15.721 { 00:05:15.721 "nbd_device": "/dev/nbd1", 00:05:15.721 "bdev_name": "Malloc1" 00:05:15.721 } 00:05:15.721 ]' 00:05:15.721 11:28:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:15.721 11:28:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:15.721 /dev/nbd1' 00:05:15.721 11:28:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:15.721 /dev/nbd1' 00:05:15.721 11:28:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:15.721 11:28:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:15.721 11:28:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:15.721 11:28:41 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:15.721 11:28:41 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:15.721 11:28:41 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:15.721 11:28:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.721 11:28:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:15.721 11:28:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:15.721 11:28:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:15.721 11:28:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:15.721 11:28:41 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:15.721 256+0 records in 00:05:15.721 256+0 records out 00:05:15.721 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119505 s, 87.7 MB/s 00:05:15.721 11:28:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.721 11:28:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:15.721 256+0 records in 00:05:15.721 256+0 records out 00:05:15.721 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121447 s, 86.3 MB/s 00:05:15.721 11:28:41 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:15.721 11:28:41 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:15.721 256+0 records in 00:05:15.721 256+0 records out 00:05:15.721 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134402 s, 78.0 MB/s 00:05:15.721 11:28:41 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:15.721 11:28:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.721 11:28:41 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:15.721 11:28:41 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:15.721 11:28:41 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:15.721 11:28:41 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:15.721 11:28:41 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:15.721 11:28:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:15.721 11:28:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:15.721 11:28:41 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:15.721 11:28:41 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:15.721 11:28:41 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:15.721 11:28:41 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:15.721 11:28:41 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.721 11:28:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:15.721 11:28:41 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:15.721 11:28:41 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:15.721 11:28:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:15.721 11:28:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:15.983 11:28:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:15.983 11:28:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:15.983 11:28:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:15.983 11:28:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:15.983 11:28:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:15.983 11:28:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:15.983 11:28:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:15.983 11:28:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:15.983 11:28:41 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:15.983 11:28:41 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:16.244 11:28:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:16.244 11:28:41 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:16.244 11:28:41 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:16.244 11:28:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:16.244 11:28:41 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:16.244 11:28:41 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:16.244 11:28:41 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:16.244 11:28:41 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:16.244 11:28:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:16.244 11:28:41 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.245 11:28:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:16.506 11:28:41 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:16.506 11:28:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:16.506 11:28:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:16.506 11:28:41 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:16.506 11:28:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:16.506 11:28:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:16.506 11:28:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:16.506 11:28:41 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:16.506 11:28:41 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:16.506 11:28:41 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:16.506 11:28:41 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:16.506 11:28:41 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:16.506 11:28:41 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:16.767 11:28:42 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:16.767 [2024-11-15 11:28:42.099068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:16.767 [2024-11-15 11:28:42.127999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.767 [2024-11-15 11:28:42.127999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.767 [2024-11-15 11:28:42.157758] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:16.767 [2024-11-15 11:28:42.157788] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:20.065 11:28:45 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:20.065 11:28:45 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:20.065 spdk_app_start Round 2 00:05:20.065 11:28:45 event.app_repeat -- event/event.sh@25 -- # waitforlisten 839898 /var/tmp/spdk-nbd.sock 00:05:20.065 11:28:45 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 839898 ']' 00:05:20.065 11:28:45 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:20.065 11:28:45 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:20.065 11:28:45 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:20.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
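One detail worth calling out in these traces: before dd touches a freshly exported device, waitfornbd polls /proc/partitions until the kernel has registered it, then performs a single O_DIRECT read to prove the device actually services I/O; waitfornbd_exit polls the same file on teardown until the entry disappears. A condensed sketch of both, assuming the 20-iteration budget seen above (the real helper copies one block into a scratch file and checks its size is nonzero, as the stat and rm calls in the trace show):

    waitfornbd() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break      # kernel sees the device
        sleep 0.1
      done
      # One direct read: existence in /proc/partitions alone does not prove I/O works.
      dd if=/dev/"$nbd_name" of=/dev/null bs=4096 count=1 iflag=direct
    }
    waitfornbd_exit() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions || return 0   # device is gone
        sleep 0.1
      done
      return 1
    }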
00:05:20.065 11:28:45 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:20.065 11:28:45 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:20.065 11:28:45 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:20.065 11:28:45 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:20.066 11:28:45 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:20.066 Malloc0 00:05:20.066 11:28:45 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:20.066 Malloc1 00:05:20.327 11:28:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:20.327 11:28:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.327 11:28:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:20.327 11:28:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:20.327 11:28:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.327 11:28:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:20.327 11:28:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:20.327 11:28:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.327 11:28:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:20.327 11:28:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:20.327 11:28:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.327 11:28:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:20.327 11:28:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:20.327 11:28:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:20.327 11:28:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.327 11:28:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:20.327 /dev/nbd0 00:05:20.327 11:28:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:20.327 11:28:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:20.327 11:28:45 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:20.327 11:28:45 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:20.327 11:28:45 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:20.327 11:28:45 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:20.327 11:28:45 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:20.327 11:28:45 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:20.327 11:28:45 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:20.327 11:28:45 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:20.327 11:28:45 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:20.327 1+0 records in 00:05:20.327 1+0 records out 00:05:20.327 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000281617 s, 14.5 MB/s 00:05:20.327 11:28:45 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:20.327 11:28:45 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:20.327 11:28:45 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:20.327 11:28:45 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:20.327 11:28:45 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:20.327 11:28:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:20.327 11:28:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.327 11:28:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:20.589 /dev/nbd1 00:05:20.589 11:28:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:20.589 11:28:46 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:20.589 11:28:46 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:20.589 11:28:46 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:20.589 11:28:46 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:20.589 11:28:46 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:20.589 11:28:46 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:20.589 11:28:46 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:20.589 11:28:46 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:20.589 11:28:46 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:20.589 11:28:46 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:20.589 1+0 records in 00:05:20.589 1+0 records out 00:05:20.589 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283124 s, 14.5 MB/s 00:05:20.589 11:28:46 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:20.589 11:28:46 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:20.589 11:28:46 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:20.589 11:28:46 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:20.589 11:28:46 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:20.589 11:28:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:20.589 11:28:46 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.589 11:28:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:20.589 11:28:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.589 11:28:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:20.851 11:28:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:20.851 { 00:05:20.851 "nbd_device": "/dev/nbd0", 00:05:20.851 "bdev_name": "Malloc0" 00:05:20.851 }, 00:05:20.851 { 00:05:20.851 "nbd_device": "/dev/nbd1", 00:05:20.851 "bdev_name": "Malloc1" 00:05:20.851 } 00:05:20.851 ]' 00:05:20.851 11:28:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:20.851 { 00:05:20.851 "nbd_device": "/dev/nbd0", 00:05:20.851 "bdev_name": "Malloc0" 00:05:20.851 }, 00:05:20.851 { 00:05:20.851 "nbd_device": "/dev/nbd1", 00:05:20.851 "bdev_name": "Malloc1" 00:05:20.851 } 00:05:20.851 ]' 00:05:20.851 11:28:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:20.851 11:28:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:20.851 /dev/nbd1' 00:05:20.851 11:28:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:20.851 /dev/nbd1' 00:05:20.851 11:28:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:20.851 11:28:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:20.851 11:28:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:20.851 11:28:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:20.851 11:28:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:20.851 11:28:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:20.851 11:28:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.851 11:28:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:20.851 11:28:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:20.851 11:28:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:20.851 11:28:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:20.851 11:28:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:20.851 256+0 records in 00:05:20.851 256+0 records out 00:05:20.851 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123583 s, 84.8 MB/s 00:05:20.851 11:28:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:20.851 11:28:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:20.851 256+0 records in 00:05:20.851 256+0 records out 00:05:20.851 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120917 s, 86.7 MB/s 00:05:20.851 11:28:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:20.851 11:28:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:20.851 256+0 records in 00:05:20.851 256+0 records out 00:05:20.851 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143004 s, 73.3 MB/s 00:05:20.851 11:28:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:20.851 11:28:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.851 11:28:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:20.851 11:28:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:20.851 11:28:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:20.851 11:28:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:20.851 11:28:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:20.851 11:28:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:20.851 11:28:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:20.851 11:28:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:20.851 11:28:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:20.851 11:28:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:21.113 11:28:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:21.113 11:28:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.113 11:28:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:21.113 11:28:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:21.113 11:28:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:21.113 11:28:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:21.113 11:28:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:21.113 11:28:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:21.113 11:28:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:21.113 11:28:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:21.113 11:28:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:21.113 11:28:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:21.113 11:28:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:21.113 11:28:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:21.113 11:28:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:21.113 11:28:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:21.113 11:28:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:21.374 11:28:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:21.374 11:28:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:21.374 11:28:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:21.374 11:28:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:21.374 11:28:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:21.374 11:28:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:21.374 11:28:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:21.374 11:28:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:21.374 11:28:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:21.374 11:28:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.374 11:28:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:21.635 11:28:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:21.635 11:28:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:21.635 11:28:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:21.635 11:28:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:21.635 11:28:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:21.635 11:28:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:21.635 11:28:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:21.635 11:28:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:21.635 11:28:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:21.635 11:28:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:21.635 11:28:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:21.635 11:28:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:21.635 11:28:46 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:21.897 11:28:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:21.897 [2024-11-15 11:28:47.267718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:21.897 [2024-11-15 11:28:47.296007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.897 [2024-11-15 11:28:47.296007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.897 [2024-11-15 11:28:47.325210] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:21.897 [2024-11-15 11:28:47.325240] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:25.196 11:28:50 event.app_repeat -- event/event.sh@38 -- # waitforlisten 839898 /var/tmp/spdk-nbd.sock 00:05:25.196 11:28:50 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 839898 ']' 00:05:25.196 11:28:50 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:25.196 11:28:50 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:25.196 11:28:50 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:25.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
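After each round stops its disks, nbd_get_count confirms nothing is still exported before the app is shut down gracefully: nbd_get_disks returns a JSON array, jq extracts the nbd_device fields, grep -c counts them (with || true so a zero count's nonzero grep status does not abort the script), and spdk_kill_instance SIGTERM asks the app to exit its reactor loop rather than killing it outright. The same teardown, condensed into one pipeline:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    count=$($rpc -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [ "$count" -eq 0 ] || exit 1                  # something is still exported
    $rpc -s "$sock" spdk_kill_instance SIGTERM    # graceful shutdown via RPC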
00:05:25.196 11:28:50 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:25.196 11:28:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:25.196 11:28:50 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:25.196 11:28:50 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:25.196 11:28:50 event.app_repeat -- event/event.sh@39 -- # killprocess 839898 00:05:25.196 11:28:50 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 839898 ']' 00:05:25.196 11:28:50 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 839898 00:05:25.196 11:28:50 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:05:25.196 11:28:50 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:25.196 11:28:50 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 839898 00:05:25.196 11:28:50 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:25.196 11:28:50 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:25.196 11:28:50 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 839898' 00:05:25.196 killing process with pid 839898 00:05:25.196 11:28:50 event.app_repeat -- common/autotest_common.sh@971 -- # kill 839898 00:05:25.196 11:28:50 event.app_repeat -- common/autotest_common.sh@976 -- # wait 839898 00:05:25.196 spdk_app_start is called in Round 0. 00:05:25.196 Shutdown signal received, stop current app iteration 00:05:25.196 Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 reinitialization... 00:05:25.196 spdk_app_start is called in Round 1. 00:05:25.196 Shutdown signal received, stop current app iteration 00:05:25.196 Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 reinitialization... 00:05:25.196 spdk_app_start is called in Round 2. 00:05:25.196 Shutdown signal received, stop current app iteration 00:05:25.196 Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 reinitialization... 00:05:25.196 spdk_app_start is called in Round 3. 
00:05:25.196 Shutdown signal received, stop current app iteration
00:05:25.196 11:28:50 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:05:25.196 11:28:50 event.app_repeat -- event/event.sh@42 -- # return 0
00:05:25.196
00:05:25.196 real 0m15.895s
00:05:25.196 user 0m34.931s
00:05:25.196 sys 0m2.290s
00:05:25.196 11:28:50 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:25.197 11:28:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:05:25.197 ************************************
00:05:25.197 END TEST app_repeat
00:05:25.197 ************************************
00:05:25.197 11:28:50 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:05:25.197 11:28:50 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:05:25.197 11:28:50 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:25.197 11:28:50 event -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:25.197 11:28:50 event -- common/autotest_common.sh@10 -- # set +x
00:05:25.197 ************************************
00:05:25.197 START TEST cpu_locks
00:05:25.197 ************************************
00:05:25.197 11:28:50 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh
00:05:25.197 * Looking for test storage...
00:05:25.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:05:25.197 11:28:50 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:05:25.458 11:28:50 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version
00:05:25.458 11:28:50 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:05:25.458 11:28:50 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:05:25.458 11:28:50 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:25.458 11:28:50 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:25.458 11:28:50 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:25.458 11:28:50 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:05:25.458 11:28:50 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:05:25.458 11:28:50 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:05:25.458 11:28:50 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:05:25.458 11:28:50 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:05:25.458 11:28:50 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:05:25.458 11:28:50 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:05:25.458 11:28:50 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:25.458 11:28:50 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:05:25.458 11:28:50 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:05:25.458 11:28:50 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:25.458 11:28:50 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:25.458 11:28:50 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:05:25.458 11:28:50 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:05:25.458 11:28:50 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:25.458 11:28:50 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:05:25.458 11:28:50 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:05:25.458 11:28:50 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:05:25.458 11:28:50 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:05:25.458 11:28:50 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:25.458 11:28:50 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:05:25.459 11:28:50 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:05:25.459 11:28:50 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:25.459 11:28:50 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:25.459 11:28:50 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:05:25.459 11:28:50 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:25.459 11:28:50 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:05:25.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:25.459 --rc genhtml_branch_coverage=1
00:05:25.459 --rc genhtml_function_coverage=1
00:05:25.459 --rc genhtml_legend=1
00:05:25.459 --rc geninfo_all_blocks=1
00:05:25.459 --rc geninfo_unexecuted_blocks=1
00:05:25.459
00:05:25.459 '
00:05:25.459 11:28:50 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:05:25.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:25.459 --rc genhtml_branch_coverage=1
00:05:25.459 --rc genhtml_function_coverage=1
00:05:25.459 --rc genhtml_legend=1
00:05:25.459 --rc geninfo_all_blocks=1
00:05:25.459 --rc geninfo_unexecuted_blocks=1
00:05:25.459
00:05:25.459 '
00:05:25.459 11:28:50 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:05:25.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:25.459 --rc genhtml_branch_coverage=1
00:05:25.459 --rc genhtml_function_coverage=1
00:05:25.459 --rc genhtml_legend=1
00:05:25.459 --rc geninfo_all_blocks=1
00:05:25.459 --rc geninfo_unexecuted_blocks=1
00:05:25.459
00:05:25.459 '
00:05:25.459 11:28:50 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:05:25.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:25.459 --rc genhtml_branch_coverage=1
00:05:25.459 --rc genhtml_function_coverage=1
00:05:25.459 --rc genhtml_legend=1
00:05:25.459 --rc geninfo_all_blocks=1
00:05:25.459 --rc geninfo_unexecuted_blocks=1
00:05:25.459
00:05:25.459 '
00:05:25.459 11:28:50 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:05:25.459 11:28:50 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:05:25.459 11:28:50 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:05:25.459 11:28:50 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:05:25.459 11:28:50 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:25.459 11:28:50 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:25.459 11:28:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
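The scripts/common.sh trace above ('lt 1.15 2') is a pure-bash version comparison that decides which lcov options to export: both version strings are split on '.', '-' and ':' and compared element by element. A condensed sketch of the idea, with the caveat that the real cmp_versions also validates each element as a decimal and supports the full set of comparison operators:

  # Return success when version $1 is strictly older than version $2.
  lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1  # versions are equal
  }
  lt 1.15 2 && echo "lcov is older than 2, keep the lcov 1.x coverage options"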
00:05:25.459 ************************************
00:05:25.459 START TEST default_locks
************************************
00:05:25.459 11:28:50 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks
00:05:25.459 11:28:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=843367
00:05:25.459 11:28:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 843367
00:05:25.459 11:28:50 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 843367 ']'
00:05:25.459 11:28:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:25.459 11:28:50 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:25.459 11:28:50 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:25.459 11:28:50 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:25.459 11:28:50 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:25.459 11:28:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:25.459 [2024-11-15 11:28:50.896882] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
[2024-11-15 11:28:50.896943] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid843367 ]
00:05:25.720 [2024-11-15 11:28:50.983963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:25.720 [2024-11-15 11:28:51.019150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:26.290 11:28:51 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:26.290 11:28:51 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0
00:05:26.290 11:28:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 843367
00:05:26.290 11:28:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 843367
00:05:26.290 11:28:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:26.863 lslocks: write error
00:05:26.863 11:28:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 843367
00:05:26.863 11:28:52 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 843367 ']'
00:05:26.863 11:28:52 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 843367
00:05:26.863 11:28:52 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname
00:05:26.863 11:28:52 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:26.863 11:28:52 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 843367
00:05:26.863 11:28:52 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:26.863 11:28:52 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:05:26.863 11:28:52 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 843367'
killing process with pid 843367
00:05:26.863 11:28:52 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 843367
00:05:26.863 11:28:52 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 843367
00:05:26.863 11:28:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 843367
00:05:26.863 11:28:52 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0
00:05:26.863 11:28:52 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 843367
00:05:26.863 11:28:52 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:05:26.863 11:28:52 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:26.863 11:28:52 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:05:26.863 11:28:52 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:26.863 11:28:52 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 843367
00:05:26.863 11:28:52 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 843367 ']'
00:05:26.863 11:28:52 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:26.863 11:28:52 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:26.863 11:28:52 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
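The locks_exist check traced a little earlier (cpu_locks.sh@22) rests on a simple primitive: spdk_tgt takes a POSIX file lock on /var/tmp/spdk_cpu_lock_NNN for every core it claims, so lslocks can prove the lock is held by a given pid. A minimal sketch of the check:

  # Does pid $1 hold a lock on one of the SPDK per-core lock files?
  locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
  }
  locks_exist 843367 && echo "core lock held"

The recurring 'lslocks: write error' lines in this log are harmless: grep -q exits on the first match and closes the pipe, so lslocks fails to write its remaining output.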
00:05:26.863 11:28:52 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:26.863 11:28:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:26.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (843367) - No such process
00:05:26.863 ERROR: process (pid: 843367) is no longer running
00:05:26.863 11:28:52 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:26.863 11:28:52 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1
00:05:26.863 11:28:52 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1
00:05:26.863 11:28:52 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:26.863 11:28:52 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:26.863 11:28:52 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:26.863 11:28:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:05:26.863 11:28:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:26.863 11:28:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:05:26.863 11:28:52 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:26.864
00:05:26.864 real 0m1.524s
00:05:26.864 user 0m1.648s
00:05:26.864 sys 0m0.531s
00:05:26.864 11:28:52 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:26.864 11:28:52 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:26.864 ************************************
00:05:26.864 END TEST default_locks
00:05:26.864 ************************************
00:05:27.125 11:28:52 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:05:27.125 11:28:52 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:27.125 11:28:52 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:27.125 11:28:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:27.125 ************************************
00:05:27.125 START TEST default_locks_via_rpc
00:05:27.125 ************************************
00:05:27.125 11:28:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc
00:05:27.125 11:28:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=843680
00:05:27.125 11:28:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 843680
00:05:27.125 11:28:52 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:27.125 11:28:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 843680 ']'
00:05:27.125 11:28:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:27.125 11:28:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:27.125 11:28:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
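waitforlisten (autotest_common.sh@833 onward, traced above) blocks until a freshly launched target answers on its UNIX-domain RPC socket, giving up after max_retries=100 attempts or as soon as the pid disappears. In the same spirit, roughly; rpc_get_methods is used here only as a cheap liveness probe, and the real helper's polling details differ:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Wait until pid $1 is listening on RPC socket $2 (default /var/tmp/spdk.sock).
  waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
    while (( max_retries-- > 0 )); do
      kill -0 "$pid" 2>/dev/null || return 1                      # target died early
      "$rpc_py" -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
      sleep 0.1
    done
    return 1
  }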
00:05:27.125 11:28:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:27.125 11:28:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:27.125 [2024-11-15 11:28:52.489207] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
[2024-11-15 11:28:52.489267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid843680 ]
00:05:27.125 [2024-11-15 11:28:52.578065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:27.125 [2024-11-15 11:28:52.613266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:28.067 11:28:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:28.067 11:28:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0
00:05:28.067 11:28:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:05:28.067 11:28:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:28.067 11:28:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:28.067 11:28:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:28.067 11:28:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:05:28.067 11:28:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:28.067 11:28:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:05:28.067 11:28:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:28.067 11:28:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:05:28.067 11:28:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:28.067 11:28:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:28.067 11:28:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:28.067 11:28:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 843680
00:05:28.067 11:28:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 843680
00:05:28.067 11:28:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:28.328 11:28:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 843680
00:05:28.328 11:28:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 843680 ']'
00:05:28.328 11:28:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 843680
00:05:28.328 11:28:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname
00:05:28.328 11:28:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:28.328 11:28:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 843680
00:05:28.328 11:28:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:28.328 11:28:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
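default_locks_via_rpc exercises the same lock files, but through the RPC layer of a live target instead of process startup: framework_disable_cpumask_locks must drop the lock files (so no_locks passes), and framework_enable_cpumask_locks must take them again (so locks_exist passes). Stripped of the rpc_cmd plumbing, the two calls are simply:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  "$rpc_py" -s /var/tmp/spdk.sock framework_disable_cpumask_locks  # releases /var/tmp/spdk_cpu_lock_*
  "$rpc_py" -s /var/tmp/spdk.sock framework_enable_cpumask_locks   # re-acquires them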
00:05:28.328 11:28:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 843680'
killing process with pid 843680
11:28:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 843680
11:28:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 843680
00:05:28.588
00:05:28.588 real 0m1.546s
00:05:28.588 user 0m1.670s
00:05:28.588 sys 0m0.534s
00:05:28.588 11:28:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:28.588 11:28:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:28.588 ************************************
00:05:28.588 END TEST default_locks_via_rpc
00:05:28.588 ************************************
00:05:28.588 11:28:54 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:05:28.588 11:28:54 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:28.588 11:28:54 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:28.588 11:28:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:28.588 ************************************
00:05:28.588 START TEST non_locking_app_on_locked_coremask
00:05:28.588 ************************************
00:05:28.588 11:28:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask
00:05:28.588 11:28:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=844002
00:05:28.588 11:28:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 844002 /var/tmp/spdk.sock
00:05:28.588 11:28:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:28.588 11:28:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 844002 ']'
00:05:28.588 11:28:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:28.588 11:28:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:28.588 11:28:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:28.588 11:28:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:28.588 11:28:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:28.848 [2024-11-15 11:28:54.110448] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
00:05:28.848 [2024-11-15 11:28:54.110501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid844002 ]
00:05:28.848 [2024-11-15 11:28:54.197938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:28.848 [2024-11-15 11:28:54.231557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:29.424 11:28:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:29.424 11:28:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0
00:05:29.424 11:28:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=844236
00:05:29.424 11:28:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 844236 /var/tmp/spdk2.sock
00:05:29.424 11:28:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:05:29.424 11:28:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 844236 ']'
00:05:29.424 11:28:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:29.424 11:28:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:29.424 11:28:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:29.424 11:28:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:29.424 11:28:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:29.687 [2024-11-15 11:28:54.952750] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
[2024-11-15 11:28:54.952803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid844236 ]
00:05:29.687 [2024-11-15 11:28:55.039586] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:29.687 [2024-11-15 11:28:55.039606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.687 [2024-11-15 11:28:55.096794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.257 11:28:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:30.257 11:28:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:30.257 11:28:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 844002 00:05:30.257 11:28:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 844002 00:05:30.257 11:28:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:31.198 lslocks: write error 00:05:31.199 11:28:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 844002 00:05:31.199 11:28:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 844002 ']' 00:05:31.199 11:28:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 844002 00:05:31.199 11:28:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:31.199 11:28:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:31.199 11:28:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 844002 00:05:31.199 11:28:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:31.199 11:28:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:31.199 11:28:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 844002' 00:05:31.199 killing process with pid 844002 00:05:31.199 11:28:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 844002 00:05:31.199 11:28:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 844002 00:05:31.460 11:28:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 844236 00:05:31.460 11:28:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 844236 ']' 00:05:31.460 11:28:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 844236 00:05:31.460 11:28:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:31.460 11:28:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:31.460 11:28:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 844236 00:05:31.460 11:28:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:31.460 11:28:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:31.460 11:28:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 844236' 00:05:31.460 killing 
00:05:31.460 11:28:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 844236
11:28:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 844236
00:05:31.722
00:05:31.722 real 0m3.057s
00:05:31.722 user 0m3.401s
00:05:31.722 sys 0m0.944s
00:05:31.722 11:28:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:31.722 11:28:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:31.722 ************************************
00:05:31.722 END TEST non_locking_app_on_locked_coremask
00:05:31.722 ************************************
00:05:31.722 11:28:57 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:05:31.722 11:28:57 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:31.722 11:28:57 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:31.722 11:28:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:31.722 ************************************
00:05:31.722 START TEST locking_app_on_unlocked_coremask
00:05:31.722 ************************************
00:05:31.722 11:28:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask
00:05:31.722 11:28:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=844615
00:05:31.722 11:28:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 844615 /var/tmp/spdk.sock
00:05:31.722 11:28:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:05:31.722 11:28:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 844615 ']'
00:05:31.722 11:28:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:31.722 11:28:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:31.722 11:28:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:31.722 11:28:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:31.722 11:28:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:31.983 [2024-11-15 11:28:57.246259] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
[2024-11-15 11:28:57.246323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid844615 ]
00:05:31.983 [2024-11-15 11:28:57.333715] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
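The 'CPU core locks deactivated.' notice above is the crux of these paired tests: a target started with --disable-cpumask-locks takes no /var/tmp/spdk_cpu_lock_* files, so it can share a core mask with a locking instance. The launch pattern being exercised here, in isolation (paths as used throughout this job; '&' because both targets keep running while the test inspects them):

  spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  "$spdk_tgt" -m 0x1 --disable-cpumask-locks &        # no lock taken on core 0
  pid1=$!
  "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &         # locking instance, second RPC socket
  pid2=$!
  # Only pid2 should now hold /var/tmp/spdk_cpu_lock_000.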
00:05:31.983 [2024-11-15 11:28:57.333748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.983 [2024-11-15 11:28:57.373684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.555 11:28:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:32.555 11:28:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:32.555 11:28:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=844946 00:05:32.555 11:28:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 844946 /var/tmp/spdk2.sock 00:05:32.555 11:28:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 844946 ']' 00:05:32.555 11:28:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:32.555 11:28:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:32.555 11:28:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:32.555 11:28:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:32.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:32.555 11:28:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:32.555 11:28:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.816 [2024-11-15 11:28:58.103399] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:05:32.816 [2024-11-15 11:28:58.103454] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid844946 ]
00:05:32.816 [2024-11-15 11:28:58.193819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:32.816 [2024-11-15 11:28:58.252095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:33.387 11:28:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:33.387 11:28:58 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0
00:05:33.647 11:28:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 844946
00:05:33.647 11:28:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 844946
00:05:33.647 11:28:58 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:34.223 lslocks: write error
00:05:34.223 11:28:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 844615
00:05:34.223 11:28:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 844615 ']'
00:05:34.223 11:28:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 844615
00:05:34.223 11:28:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname
00:05:34.223 11:28:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:34.223 11:28:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 844615
00:05:34.223 11:28:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:34.223 11:28:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:05:34.223 11:28:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 844615'
killing process with pid 844615
11:28:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 844615
11:28:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 844615
00:05:34.486 11:28:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 844946
00:05:34.486 11:28:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 844946 ']'
00:05:34.486 11:28:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 844946
00:05:34.486 11:28:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname
00:05:34.486 11:28:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:34.486 11:28:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 844946
00:05:34.486 11:28:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:34.486 11:28:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:05:34.486 11:28:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 844946'
killing process with pid 844946
11:28:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 844946
11:28:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 844946
00:05:34.747
00:05:34.747 real 0m2.974s
00:05:34.747 user 0m3.296s
00:05:34.747 sys 0m0.929s
00:05:34.747 11:29:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:34.747 11:29:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:34.747 ************************************
00:05:34.747 END TEST locking_app_on_unlocked_coremask
00:05:34.747 ************************************
00:05:34.747 11:29:00 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:05:34.747 11:29:00 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:34.747 11:29:00 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:34.747 11:29:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:34.747 ************************************
00:05:34.747 START TEST locking_app_on_locked_coremask
00:05:34.747 ************************************
00:05:34.747 11:29:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask
00:05:34.747 11:29:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=845321
00:05:34.747 11:29:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 845321 /var/tmp/spdk.sock
00:05:34.747 11:29:00 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:34.747 11:29:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 845321 ']'
00:05:34.747 11:29:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:34.747 11:29:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:34.747 11:29:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:34.747 11:29:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:34.747 11:29:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:35.007 [2024-11-15 11:29:00.294331] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
00:05:35.007 [2024-11-15 11:29:00.294384] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid845321 ]
00:05:35.007 [2024-11-15 11:29:00.379605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:35.007 [2024-11-15 11:29:00.412991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:35.579 11:29:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:35.579 11:29:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0
00:05:35.579 11:29:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=845552
00:05:35.839 11:29:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 845552 /var/tmp/spdk2.sock
00:05:35.839 11:29:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0
00:05:35.839 11:29:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:35.839 11:29:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 845552 /var/tmp/spdk2.sock
00:05:35.839 11:29:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:05:35.839 11:29:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:35.839 11:29:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:05:35.839 11:29:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:35.839 11:29:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 845552 /var/tmp/spdk2.sock
00:05:35.839 11:29:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 845552 ']'
00:05:35.839 11:29:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:35.839 11:29:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:35.839 11:29:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:35.839 11:29:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:35.839 11:29:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:35.839 [2024-11-15 11:29:01.134249] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
00:05:35.839 [2024-11-15 11:29:01.134303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid845552 ]
00:05:35.839 [2024-11-15 11:29:01.219273] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 845321 has claimed it.
00:05:35.839 [2024-11-15 11:29:01.219304] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:36.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (845552) - No such process
00:05:36.412 ERROR: process (pid: 845552) is no longer running
00:05:36.412 11:29:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:36.412 11:29:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1
00:05:36.412 11:29:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1
00:05:36.412 11:29:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:36.412 11:29:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:36.412 11:29:01 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:36.412 11:29:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 845321
00:05:36.412 11:29:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 845321
00:05:36.412 11:29:01 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:36.983 lslocks: write error
00:05:36.983 11:29:02 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 845321
00:05:36.983 11:29:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 845321 ']'
00:05:36.983 11:29:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 845321
00:05:36.983 11:29:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname
00:05:36.983 11:29:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:36.983 11:29:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 845321
00:05:36.983 11:29:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:36.983 11:29:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:05:36.983 11:29:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 845321'
killing process with pid 845321
11:29:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 845321
11:29:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 845321
00:05:37.244
00:05:37.244 real 0m2.284s
00:05:37.244 user 0m2.582s
00:05:37.244 sys 0m0.650s
00:05:37.244 11:29:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:37.244 11:29:02 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
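Here the polarity flips: with pid 845321 holding the core 0 lock, a second plain spdk_tgt -m 0x1 must die with 'Unable to acquire lock on assigned core mask - exiting.', and the NOT helper turns that expected failure into test success. The shape of NOT, reduced to its core (the real helper in autotest_common.sh also distinguishes crashes, es > 128, from ordinary failures, as the es bookkeeping in the trace shows):

  # Succeed only if the wrapped command fails.
  NOT() {
    if "$@"; then
      return 1
    fi
    return 0
  }
  NOT waitforlisten 845552 /var/tmp/spdk2.sock && echo "second instance was refused, as expected"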
00:05:37.244 ************************************
00:05:37.244 END TEST locking_app_on_locked_coremask
00:05:37.244 ************************************
00:05:37.244 11:29:02 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:05:37.244 11:29:02 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:37.244 11:29:02 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:37.244 11:29:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:37.244 ************************************
00:05:37.244 START TEST locking_overlapped_coremask
00:05:37.244 ************************************
00:05:37.245 11:29:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask
00:05:37.245 11:29:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=845835
00:05:37.245 11:29:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 845835 /var/tmp/spdk.sock
00:05:37.245 11:29:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:05:37.245 11:29:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 845835 ']'
00:05:37.245 11:29:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:37.245 11:29:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:37.245 11:29:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:37.245 11:29:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:37.245 11:29:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:37.245 [2024-11-15 11:29:02.655245] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
00:05:37.245 [2024-11-15 11:29:02.655302] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid845835 ]
00:05:37.506 [2024-11-15 11:29:02.741990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:37.506 [2024-11-15 11:29:02.777384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:37.506 [2024-11-15 11:29:02.777534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:37.506 [2024-11-15 11:29:02.777535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:38.078 11:29:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:38.078 11:29:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0
00:05:38.078 11:29:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=846034
00:05:38.078 11:29:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 846034 /var/tmp/spdk2.sock
00:05:38.078 11:29:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0
00:05:38.078 11:29:03 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:05:38.078 11:29:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 846034 /var/tmp/spdk2.sock
00:05:38.078 11:29:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:05:38.078 11:29:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:38.078 11:29:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:05:38.078 11:29:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:38.078 11:29:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 846034 /var/tmp/spdk2.sock
00:05:38.078 11:29:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 846034 ']'
00:05:38.078 11:29:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:38.078 11:29:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:38.078 11:29:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:38.078 11:29:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:38.078 11:29:03 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:38.078 [2024-11-15 11:29:03.508083] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
00:05:38.078 [2024-11-15 11:29:03.508136] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid846034 ]
00:05:38.338 [2024-11-15 11:29:03.618678] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 845835 has claimed it.
00:05:38.338 [2024-11-15 11:29:03.618717] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:38.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (846034) - No such process
00:05:38.910 ERROR: process (pid: 846034) is no longer running
00:05:38.910 11:29:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:38.910 11:29:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1
00:05:38.910 11:29:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1
00:05:38.910 11:29:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:38.910 11:29:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:38.910 11:29:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:38.910 11:29:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:05:38.910 11:29:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:05:38.910 11:29:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:05:38.910 11:29:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:05:38.910 11:29:04 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 845835
00:05:38.910 11:29:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 845835 ']'
00:05:38.910 11:29:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 845835
00:05:38.910 11:29:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname
00:05:38.910 11:29:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:38.910 11:29:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 845835
00:05:38.910 11:29:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:38.910 11:29:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:05:38.910 11:29:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 845835'
killing process with pid 845835
11:29:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 845835
11:29:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 845835
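check_remaining_locks (cpu_locks.sh@36-38, traced just above) is a pure glob comparison: after a -m 0x7 target has claimed cores 0-2, /var/tmp must contain exactly spdk_cpu_lock_000 through spdk_cpu_lock_002 and nothing else. In the spirit of the traced check:

  # With a 0x7 core mask active, exactly three lock files may exist.
  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ ${locks[*]} == "${locks_expected[*]}" ]] || {
    echo "unexpected core lock files: ${locks[*]}" >&2
    exit 1
  }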
00:05:38.910
00:05:38.910 real 0m1.781s
00:05:38.910 user 0m5.140s
00:05:38.910 sys 0m0.398s
00:05:38.910 11:29:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:38.910 11:29:04 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:38.910 ************************************
00:05:38.910 END TEST locking_overlapped_coremask
00:05:38.910 ************************************
00:05:39.171 11:29:04 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:05:39.171 11:29:04 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:39.171 11:29:04 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:39.171 11:29:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:39.171 ************************************
00:05:39.171 START TEST locking_overlapped_coremask_via_rpc
00:05:39.171 ************************************
00:05:39.171 11:29:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc
00:05:39.171 11:29:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=846287
00:05:39.171 11:29:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 846287 /var/tmp/spdk.sock
00:05:39.171 11:29:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:05:39.171 11:29:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 846287 ']'
00:05:39.171 11:29:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:39.171 11:29:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:39.171 11:29:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:39.171 11:29:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:39.171 11:29:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:39.171 [2024-11-15 11:29:04.510261] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
00:05:39.171 [2024-11-15 11:29:04.510321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid846287 ]
00:05:39.171 [2024-11-15 11:29:04.600277] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:39.171 [2024-11-15 11:29:04.600306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:39.171 [2024-11-15 11:29:04.636131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.171 [2024-11-15 11:29:04.636259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:39.171 [2024-11-15 11:29:04.636261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.111 11:29:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:40.111 11:29:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:40.111 11:29:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:40.111 11:29:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=846406 00:05:40.111 11:29:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 846406 /var/tmp/spdk2.sock 00:05:40.111 11:29:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 846406 ']' 00:05:40.111 11:29:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:40.111 11:29:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:40.111 11:29:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:40.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:40.111 11:29:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:40.111 11:29:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.111 [2024-11-15 11:29:05.357378] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:05:40.111 [2024-11-15 11:29:05.357432] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid846406 ] 00:05:40.111 [2024-11-15 11:29:05.469106] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
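Note that the reactor masks of the two targets overlap by construction: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so core 2 is the single contested core, the same core named in the claim_cpu_cores errors in this test. A quick shell check of the overlap:

    printf 'contested core mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2 only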
00:05:40.111 [2024-11-15 11:29:05.469138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:40.111 [2024-11-15 11:29:05.546850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:40.111 [2024-11-15 11:29:05.547006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:40.111 [2024-11-15 11:29:05.547007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:40.682 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:40.682 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:40.682 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:40.682 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.682 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.682 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.682 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:40.682 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:40.682 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:40.682 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:40.682 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:40.682 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:40.682 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:40.682 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:40.682 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.682 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.682 [2024-11-15 11:29:06.171649] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 846287 has claimed it. 
00:05:40.943 request:
00:05:40.943 {
00:05:40.943 "method": "framework_enable_cpumask_locks",
00:05:40.943 "req_id": 1
00:05:40.943 }
00:05:40.943 Got JSON-RPC error response
00:05:40.943 response:
00:05:40.943 {
00:05:40.943 "code": -32603,
00:05:40.943 "message": "Failed to claim CPU core: 2"
00:05:40.943 }
00:05:40.943 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:05:40.943 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1
00:05:40.943 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:40.943 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:40.943 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:40.943 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 846287 /var/tmp/spdk.sock
00:05:40.943 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 846287 ']'
00:05:40.943 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:40.943 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:40.943 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable
00:05:40.943 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:40.943 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:40.943 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0
00:05:40.943 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 846406 /var/tmp/spdk2.sock
00:05:40.943 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 846406 ']'
00:05:40.943 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:40.943 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100
00:05:40.943 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
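For reference, the -32603 "Failed to claim CPU core: 2" response above is the expected outcome of the sequence this test drives; a minimal sketch of the same flow (binary, masks and socket paths taken from this run; exact ordering assumed):

    spdk_tgt -m 0x7 --disable-cpumask-locks &                            # first target, lock claim deferred
    spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &    # second target, overlaps on core 2
    rpc.py framework_enable_cpumask_locks                                # first target claims /var/tmp/spdk_cpu_lock_000..002
    rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks         # fails with -32603: core 2 already claimed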
00:05:40.943 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:40.943 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.204 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:41.204 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:41.204 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:41.204 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:41.204 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:41.205 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:41.205 00:05:41.205 real 0m2.084s 00:05:41.205 user 0m0.860s 00:05:41.205 sys 0m0.154s 00:05:41.205 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:41.205 11:29:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:41.205 ************************************ 00:05:41.205 END TEST locking_overlapped_coremask_via_rpc 00:05:41.205 ************************************ 00:05:41.205 11:29:06 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:41.205 11:29:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 846287 ]] 00:05:41.205 11:29:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 846287 00:05:41.205 11:29:06 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 846287 ']' 00:05:41.205 11:29:06 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 846287 00:05:41.205 11:29:06 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:41.205 11:29:06 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:41.205 11:29:06 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 846287 00:05:41.205 11:29:06 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:41.205 11:29:06 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:41.205 11:29:06 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 846287' 00:05:41.205 killing process with pid 846287 00:05:41.205 11:29:06 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 846287 00:05:41.205 11:29:06 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 846287 00:05:41.465 11:29:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 846406 ]] 00:05:41.465 11:29:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 846406 00:05:41.465 11:29:06 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 846406 ']' 00:05:41.465 11:29:06 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 846406 00:05:41.465 11:29:06 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:41.465 11:29:06 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 
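The check_remaining_locks comparison above only matches file names; whether a lock is actually held is not visible from a directory listing. Assuming the target pins the files with flock, as the claim_cpu_cores errors earlier suggest, a non-blocking probe from the shell would look roughly like this (util-linux flock(1) assumed):

    flock -n /var/tmp/spdk_cpu_lock_002 true || echo 'core 2 lock is still held by a running target'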
00:05:41.465 11:29:06 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 846406 00:05:41.465 11:29:06 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:41.465 11:29:06 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:41.465 11:29:06 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 846406' 00:05:41.465 killing process with pid 846406 00:05:41.465 11:29:06 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 846406 00:05:41.465 11:29:06 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 846406 00:05:41.727 11:29:07 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:41.727 11:29:07 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:41.727 11:29:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 846287 ]] 00:05:41.727 11:29:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 846287 00:05:41.727 11:29:07 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 846287 ']' 00:05:41.727 11:29:07 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 846287 00:05:41.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (846287) - No such process 00:05:41.727 11:29:07 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 846287 is not found' 00:05:41.727 Process with pid 846287 is not found 00:05:41.727 11:29:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 846406 ]] 00:05:41.727 11:29:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 846406 00:05:41.727 11:29:07 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 846406 ']' 00:05:41.727 11:29:07 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 846406 00:05:41.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (846406) - No such process 00:05:41.727 11:29:07 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 846406 is not found' 00:05:41.727 Process with pid 846406 is not found 00:05:41.727 11:29:07 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:41.727 00:05:41.727 real 0m16.509s 00:05:41.727 user 0m28.593s 00:05:41.727 sys 0m5.112s 00:05:41.727 11:29:07 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:41.727 11:29:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.727 ************************************ 00:05:41.727 END TEST cpu_locks 00:05:41.727 ************************************ 00:05:41.727 00:05:41.727 real 0m42.467s 00:05:41.727 user 1m22.986s 00:05:41.727 sys 0m8.521s 00:05:41.727 11:29:07 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:41.727 11:29:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:41.727 ************************************ 00:05:41.727 END TEST event 00:05:41.727 ************************************ 00:05:41.727 11:29:07 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:41.727 11:29:07 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:41.727 11:29:07 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:41.727 11:29:07 -- common/autotest_common.sh@10 -- # set +x 00:05:41.727 ************************************ 00:05:41.727 START TEST thread 00:05:41.727 ************************************ 00:05:41.727 11:29:07 thread -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:41.989 * Looking for test storage... 00:05:41.989 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:41.989 11:29:07 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:41.989 11:29:07 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:05:41.989 11:29:07 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:41.989 11:29:07 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:41.989 11:29:07 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.989 11:29:07 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.989 11:29:07 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.989 11:29:07 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.989 11:29:07 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.989 11:29:07 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.989 11:29:07 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.989 11:29:07 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.989 11:29:07 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.989 11:29:07 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.989 11:29:07 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.989 11:29:07 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:41.989 11:29:07 thread -- scripts/common.sh@345 -- # : 1 00:05:41.989 11:29:07 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.989 11:29:07 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.989 11:29:07 thread -- scripts/common.sh@365 -- # decimal 1 00:05:41.989 11:29:07 thread -- scripts/common.sh@353 -- # local d=1 00:05:41.989 11:29:07 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.989 11:29:07 thread -- scripts/common.sh@355 -- # echo 1 00:05:41.989 11:29:07 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.989 11:29:07 thread -- scripts/common.sh@366 -- # decimal 2 00:05:41.989 11:29:07 thread -- scripts/common.sh@353 -- # local d=2 00:05:41.989 11:29:07 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.989 11:29:07 thread -- scripts/common.sh@355 -- # echo 2 00:05:41.989 11:29:07 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.989 11:29:07 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.989 11:29:07 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.989 11:29:07 thread -- scripts/common.sh@368 -- # return 0 00:05:41.989 11:29:07 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.989 11:29:07 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:41.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.989 --rc genhtml_branch_coverage=1 00:05:41.989 --rc genhtml_function_coverage=1 00:05:41.989 --rc genhtml_legend=1 00:05:41.989 --rc geninfo_all_blocks=1 00:05:41.989 --rc geninfo_unexecuted_blocks=1 00:05:41.989 00:05:41.989 ' 00:05:41.989 11:29:07 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:41.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.989 --rc genhtml_branch_coverage=1 00:05:41.989 --rc genhtml_function_coverage=1 00:05:41.989 --rc genhtml_legend=1 00:05:41.989 --rc geninfo_all_blocks=1 00:05:41.989 --rc geninfo_unexecuted_blocks=1 00:05:41.989 00:05:41.989 ' 00:05:41.989 11:29:07 thread 
-- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:41.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.989 --rc genhtml_branch_coverage=1 00:05:41.989 --rc genhtml_function_coverage=1 00:05:41.989 --rc genhtml_legend=1 00:05:41.989 --rc geninfo_all_blocks=1 00:05:41.989 --rc geninfo_unexecuted_blocks=1 00:05:41.989 00:05:41.989 ' 00:05:41.989 11:29:07 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:41.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.989 --rc genhtml_branch_coverage=1 00:05:41.989 --rc genhtml_function_coverage=1 00:05:41.989 --rc genhtml_legend=1 00:05:41.989 --rc geninfo_all_blocks=1 00:05:41.989 --rc geninfo_unexecuted_blocks=1 00:05:41.989 00:05:41.989 ' 00:05:41.989 11:29:07 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:41.989 11:29:07 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:41.989 11:29:07 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:41.989 11:29:07 thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.989 ************************************ 00:05:41.989 START TEST thread_poller_perf 00:05:41.989 ************************************ 00:05:41.989 11:29:07 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:41.989 [2024-11-15 11:29:07.478098] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:05:41.989 [2024-11-15 11:29:07.478196] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid846877 ] 00:05:42.250 [2024-11-15 11:29:07.565609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.250 [2024-11-15 11:29:07.598079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.250 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:43.191 [2024-11-15T10:29:08.689Z] ======================================
00:05:43.191 [2024-11-15T10:29:08.689Z] busy:2407507968 (cyc)
00:05:43.191 [2024-11-15T10:29:08.689Z] total_run_count: 418000
00:05:43.191 [2024-11-15T10:29:08.689Z] tsc_hz: 2400000000 (cyc)
00:05:43.191 [2024-11-15T10:29:08.689Z] ======================================
00:05:43.191 [2024-11-15T10:29:08.689Z] poller_cost: 5759 (cyc), 2399 (nsec)
00:05:43.191
00:05:43.191 real 0m1.176s user 0m1.090s sys 0m0.082s
00:05:43.191 11:29:08 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:43.192 11:29:08 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:05:43.192 ************************************
00:05:43.192 END TEST thread_poller_perf
00:05:43.192 ************************************
00:05:43.192 11:29:08 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:05:43.192 11:29:08 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']'
00:05:43.192 11:29:08 thread -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:43.192 11:29:08 thread -- common/autotest_common.sh@10 -- # set +x
00:05:43.451 ************************************
00:05:43.451 START TEST thread_poller_perf
00:05:43.451 ************************************
00:05:43.451 11:29:08 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:05:43.451 [2024-11-15 11:29:08.732120] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
00:05:43.451 [2024-11-15 11:29:08.732220] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid847206 ]
00:05:43.451 [2024-11-15 11:29:08.820154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:43.451 [2024-11-15 11:29:08.849400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:43.451 Running 1000 pollers for 1 seconds with 0 microseconds period.
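The poller_cost line in the table above is derived from the other fields: busy cycles divided by total_run_count, then converted to nanoseconds via tsc_hz. A worked check with the values from this run (64-bit shell arithmetic); the same arithmetic reproduces the 431 (cyc) / 179 (nsec) figures of the zero-period run below:

    busy=2407507968 runs=418000 tsc_hz=2400000000
    echo $(( busy / runs ))                        # 5759 cycles per poller invocation
    echo $(( busy * 1000000000 / runs / tsc_hz ))  # 2399 nsec at the 2.4 GHz TSC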
00:05:44.390 [2024-11-15T10:29:09.888Z] ======================================
00:05:44.390 [2024-11-15T10:29:09.888Z] busy:2401353332 (cyc)
00:05:44.390 [2024-11-15T10:29:09.888Z] total_run_count: 5562000
00:05:44.390 [2024-11-15T10:29:09.888Z] tsc_hz: 2400000000 (cyc)
00:05:44.390 [2024-11-15T10:29:09.888Z] ======================================
00:05:44.390 [2024-11-15T10:29:09.888Z] poller_cost: 431 (cyc), 179 (nsec)
00:05:44.390
00:05:44.390 real 0m1.166s user 0m1.083s sys 0m0.079s
00:05:44.390 11:29:09 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:44.390 11:29:09 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:05:44.390 ************************************
00:05:44.390 END TEST thread_poller_perf
00:05:44.390 ************************************
00:05:44.650 11:29:09 thread -- thread/thread.sh@17 -- # [[ y != \y ]]
00:05:44.650
00:05:44.650 real 0m2.702s user 0m2.340s sys 0m0.376s
00:05:44.650 11:29:09 thread -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:44.650 11:29:09 thread -- common/autotest_common.sh@10 -- # set +x
00:05:44.650 ************************************
00:05:44.650 END TEST thread
00:05:44.650 ************************************
00:05:44.650 11:29:09 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
00:05:44.650 11:29:09 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:05:44.650 11:29:09 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:44.650 11:29:09 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:44.650 11:29:09 -- common/autotest_common.sh@10 -- # set +x
00:05:44.650 ************************************
00:05:44.650 START TEST app_cmdline
00:05:44.650 ************************************
00:05:44.650 11:29:09 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:05:44.650 * Looking for test storage...
00:05:44.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:44.650 11:29:10 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:44.651 11:29:10 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:05:44.651 11:29:10 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:44.911 11:29:10 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:44.911 11:29:10 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.911 11:29:10 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.911 11:29:10 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.911 11:29:10 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.911 11:29:10 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.911 11:29:10 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.911 11:29:10 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.911 11:29:10 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.911 11:29:10 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.911 11:29:10 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.911 11:29:10 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.911 11:29:10 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:44.911 11:29:10 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:44.911 11:29:10 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.911 11:29:10 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.911 11:29:10 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:44.911 11:29:10 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:44.911 11:29:10 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.912 11:29:10 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:44.912 11:29:10 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.912 11:29:10 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:44.912 11:29:10 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:44.912 11:29:10 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.912 11:29:10 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:44.912 11:29:10 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.912 11:29:10 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.912 11:29:10 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.912 11:29:10 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:44.912 11:29:10 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.912 11:29:10 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:44.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.912 --rc genhtml_branch_coverage=1 00:05:44.912 --rc genhtml_function_coverage=1 00:05:44.912 --rc genhtml_legend=1 00:05:44.912 --rc geninfo_all_blocks=1 00:05:44.912 --rc geninfo_unexecuted_blocks=1 00:05:44.912 00:05:44.912 ' 00:05:44.912 11:29:10 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:44.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.912 --rc genhtml_branch_coverage=1 00:05:44.912 --rc genhtml_function_coverage=1 00:05:44.912 --rc genhtml_legend=1 00:05:44.912 --rc geninfo_all_blocks=1 00:05:44.912 --rc geninfo_unexecuted_blocks=1 
00:05:44.912 00:05:44.912 ' 00:05:44.912 11:29:10 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:44.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.912 --rc genhtml_branch_coverage=1 00:05:44.912 --rc genhtml_function_coverage=1 00:05:44.912 --rc genhtml_legend=1 00:05:44.912 --rc geninfo_all_blocks=1 00:05:44.912 --rc geninfo_unexecuted_blocks=1 00:05:44.912 00:05:44.912 ' 00:05:44.912 11:29:10 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:44.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.912 --rc genhtml_branch_coverage=1 00:05:44.912 --rc genhtml_function_coverage=1 00:05:44.912 --rc genhtml_legend=1 00:05:44.912 --rc geninfo_all_blocks=1 00:05:44.912 --rc geninfo_unexecuted_blocks=1 00:05:44.912 00:05:44.912 ' 00:05:44.912 11:29:10 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:44.912 11:29:10 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=847610 00:05:44.912 11:29:10 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 847610 00:05:44.912 11:29:10 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 847610 ']' 00:05:44.912 11:29:10 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:44.912 11:29:10 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.912 11:29:10 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:44.912 11:29:10 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.912 11:29:10 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:44.912 11:29:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:44.912 [2024-11-15 11:29:10.251349] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
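The spdk_tgt instance here runs with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are served. In outline, that is what the test exercises next (method names and flag taken from this run):

    rpc.py spdk_get_version           # allowed, returns the version object shown below
    rpc.py env_dpdk_get_mem_stats     # filtered out, answered with -32601 "Method not found" further down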
00:05:44.912 [2024-11-15 11:29:10.251426] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid847610 ]
00:05:44.912 [2024-11-15 11:29:10.340754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:44.912 [2024-11-15 11:29:10.375745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:45.854 11:29:11 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:05:45.854 11:29:11 app_cmdline -- common/autotest_common.sh@866 -- # return 0
00:05:45.854 11:29:11 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version
00:05:45.854 {
00:05:45.854 "version": "SPDK v25.01-pre git sha1 dec6d3843",
00:05:45.854 "fields": {
00:05:45.854 "major": 25,
00:05:45.854 "minor": 1,
00:05:45.854 "patch": 0,
00:05:45.854 "suffix": "-pre",
00:05:45.854 "commit": "dec6d3843"
00:05:45.854 }
00:05:45.854 }
00:05:45.854 11:29:11 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:05:45.854 11:29:11 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:05:45.854 11:29:11 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:05:45.854 11:29:11 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:05:45.854 11:29:11 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:05:45.854 11:29:11 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:05:45.854 11:29:11 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:45.854 11:29:11 app_cmdline -- app/cmdline.sh@26 -- # sort
00:05:45.854 11:29:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:05:45.854 11:29:11 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:45.854 11:29:11 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:05:45.854 11:29:11 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:05:45.854 11:29:11 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:05:45.854 11:29:11 app_cmdline -- common/autotest_common.sh@650 -- # local es=0
00:05:45.854 11:29:11 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:05:45.854 11:29:11 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:05:45.854 11:29:11 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:45.854 11:29:11 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:05:45.854 11:29:11 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:45.854 11:29:11 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:05:45.854 11:29:11 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:45.854 11:29:11 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:05:45.854 11:29:11 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:05:45.854 11:29:11 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:05:46.115 request:
00:05:46.115 {
00:05:46.115 "method": "env_dpdk_get_mem_stats",
00:05:46.116 "req_id": 1
00:05:46.116 }
00:05:46.116 Got JSON-RPC error response
00:05:46.116 response:
00:05:46.116 {
00:05:46.116 "code": -32601,
00:05:46.116 "message": "Method not found"
00:05:46.116 }
00:05:46.116 11:29:11 app_cmdline -- common/autotest_common.sh@653 -- # es=1
00:05:46.116 11:29:11 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:46.116 11:29:11 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:46.116 11:29:11 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:46.116 11:29:11 app_cmdline -- app/cmdline.sh@1 -- # killprocess 847610
00:05:46.116 11:29:11 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 847610 ']'
00:05:46.116 11:29:11 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 847610
00:05:46.116 11:29:11 app_cmdline -- common/autotest_common.sh@957 -- # uname
00:05:46.116 11:29:11 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:05:46.116 11:29:11 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 847610
00:05:46.116 11:29:11 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:05:46.116 11:29:11 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:05:46.116 11:29:11 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 847610'
killing process with pid 847610
11:29:11 app_cmdline -- common/autotest_common.sh@971 -- # kill 847610
11:29:11 app_cmdline -- common/autotest_common.sh@976 -- # wait 847610
00:05:46.377
00:05:46.377 real 0m1.698s user 0m2.037s sys 0m0.460s
00:05:46.377 11:29:11 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable
00:05:46.377 11:29:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:05:46.377 ************************************
00:05:46.377 END TEST app_cmdline
00:05:46.377 ************************************
00:05:46.377 11:29:11 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh
00:05:46.377 11:29:11 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:05:46.377 11:29:11 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:46.377 11:29:11 -- common/autotest_common.sh@10 -- # set +x
00:05:46.377 ************************************
00:05:46.377 START TEST version
00:05:46.377 ************************************
00:05:46.377 11:29:11 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh
00:05:46.377 * Looking for test storage...
00:05:46.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:46.377 11:29:11 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:46.377 11:29:11 version -- common/autotest_common.sh@1691 -- # lcov --version 00:05:46.377 11:29:11 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:46.639 11:29:11 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:46.639 11:29:11 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.639 11:29:11 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.639 11:29:11 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.639 11:29:11 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.639 11:29:11 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.639 11:29:11 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.639 11:29:11 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.639 11:29:11 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.639 11:29:11 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.639 11:29:11 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.639 11:29:11 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.639 11:29:11 version -- scripts/common.sh@344 -- # case "$op" in 00:05:46.639 11:29:11 version -- scripts/common.sh@345 -- # : 1 00:05:46.639 11:29:11 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.639 11:29:11 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:46.639 11:29:11 version -- scripts/common.sh@365 -- # decimal 1 00:05:46.639 11:29:11 version -- scripts/common.sh@353 -- # local d=1 00:05:46.639 11:29:11 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.639 11:29:11 version -- scripts/common.sh@355 -- # echo 1 00:05:46.639 11:29:11 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.639 11:29:11 version -- scripts/common.sh@366 -- # decimal 2 00:05:46.639 11:29:11 version -- scripts/common.sh@353 -- # local d=2 00:05:46.639 11:29:11 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.639 11:29:11 version -- scripts/common.sh@355 -- # echo 2 00:05:46.639 11:29:11 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.639 11:29:11 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.639 11:29:11 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.639 11:29:11 version -- scripts/common.sh@368 -- # return 0 00:05:46.639 11:29:11 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.639 11:29:11 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:46.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.639 --rc genhtml_branch_coverage=1 00:05:46.639 --rc genhtml_function_coverage=1 00:05:46.639 --rc genhtml_legend=1 00:05:46.639 --rc geninfo_all_blocks=1 00:05:46.639 --rc geninfo_unexecuted_blocks=1 00:05:46.639 00:05:46.639 ' 00:05:46.639 11:29:11 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:46.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.639 --rc genhtml_branch_coverage=1 00:05:46.639 --rc genhtml_function_coverage=1 00:05:46.639 --rc genhtml_legend=1 00:05:46.639 --rc geninfo_all_blocks=1 00:05:46.639 --rc geninfo_unexecuted_blocks=1 00:05:46.639 00:05:46.639 ' 00:05:46.639 11:29:11 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:46.639 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.639 --rc genhtml_branch_coverage=1 00:05:46.639 --rc genhtml_function_coverage=1 00:05:46.639 --rc genhtml_legend=1 00:05:46.639 --rc geninfo_all_blocks=1 00:05:46.639 --rc geninfo_unexecuted_blocks=1 00:05:46.639 00:05:46.639 ' 00:05:46.639 11:29:11 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:46.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.639 --rc genhtml_branch_coverage=1 00:05:46.639 --rc genhtml_function_coverage=1 00:05:46.639 --rc genhtml_legend=1 00:05:46.639 --rc geninfo_all_blocks=1 00:05:46.639 --rc geninfo_unexecuted_blocks=1 00:05:46.639 00:05:46.639 ' 00:05:46.639 11:29:11 version -- app/version.sh@17 -- # get_header_version major 00:05:46.639 11:29:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:46.639 11:29:11 version -- app/version.sh@14 -- # cut -f2 00:05:46.639 11:29:11 version -- app/version.sh@14 -- # tr -d '"' 00:05:46.639 11:29:11 version -- app/version.sh@17 -- # major=25 00:05:46.639 11:29:11 version -- app/version.sh@18 -- # get_header_version minor 00:05:46.639 11:29:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:46.639 11:29:11 version -- app/version.sh@14 -- # cut -f2 00:05:46.639 11:29:11 version -- app/version.sh@14 -- # tr -d '"' 00:05:46.639 11:29:11 version -- app/version.sh@18 -- # minor=1 00:05:46.639 11:29:11 version -- app/version.sh@19 -- # get_header_version patch 00:05:46.639 11:29:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:46.639 11:29:11 version -- app/version.sh@14 -- # cut -f2 00:05:46.639 11:29:11 version -- app/version.sh@14 -- # tr -d '"' 00:05:46.639 11:29:11 version -- app/version.sh@19 -- # patch=0 00:05:46.639 11:29:11 version -- app/version.sh@20 -- # get_header_version suffix 00:05:46.639 11:29:11 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:46.639 11:29:11 version -- app/version.sh@14 -- # cut -f2 00:05:46.639 11:29:12 version -- app/version.sh@14 -- # tr -d '"' 00:05:46.639 11:29:12 version -- app/version.sh@20 -- # suffix=-pre 00:05:46.639 11:29:12 version -- app/version.sh@22 -- # version=25.1 00:05:46.639 11:29:12 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:46.639 11:29:12 version -- app/version.sh@28 -- # version=25.1rc0 00:05:46.639 11:29:12 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:46.639 11:29:12 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:46.639 11:29:12 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:46.639 11:29:12 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:46.639 00:05:46.639 real 0m0.283s 00:05:46.639 user 0m0.172s 00:05:46.639 sys 0m0.161s 00:05:46.639 11:29:12 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:46.640 
11:29:12 version -- common/autotest_common.sh@10 -- # set +x 00:05:46.640 ************************************ 00:05:46.640 END TEST version 00:05:46.640 ************************************ 00:05:46.640 11:29:12 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:46.640 11:29:12 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:46.640 11:29:12 -- spdk/autotest.sh@194 -- # uname -s 00:05:46.640 11:29:12 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:46.640 11:29:12 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:46.640 11:29:12 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:46.640 11:29:12 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:46.640 11:29:12 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:05:46.640 11:29:12 -- spdk/autotest.sh@256 -- # timing_exit lib 00:05:46.640 11:29:12 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:46.640 11:29:12 -- common/autotest_common.sh@10 -- # set +x 00:05:46.901 11:29:12 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:05:46.901 11:29:12 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:05:46.901 11:29:12 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:05:46.901 11:29:12 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:05:46.901 11:29:12 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:05:46.901 11:29:12 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:05:46.901 11:29:12 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:46.901 11:29:12 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:46.901 11:29:12 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:46.901 11:29:12 -- common/autotest_common.sh@10 -- # set +x 00:05:46.901 ************************************ 00:05:46.901 START TEST nvmf_tcp 00:05:46.901 ************************************ 00:05:46.901 11:29:12 nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:46.901 * Looking for test storage... 
00:05:46.901 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:46.901 11:29:12 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:46.901 11:29:12 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:46.901 11:29:12 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:46.901 11:29:12 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:46.901 11:29:12 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.901 11:29:12 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.901 11:29:12 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.901 11:29:12 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.901 11:29:12 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.901 11:29:12 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.901 11:29:12 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.901 11:29:12 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.901 11:29:12 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.901 11:29:12 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.901 11:29:12 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.901 11:29:12 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:46.901 11:29:12 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:46.901 11:29:12 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.902 11:29:12 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:46.902 11:29:12 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:46.902 11:29:12 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:46.902 11:29:12 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.902 11:29:12 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:46.902 11:29:12 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.902 11:29:12 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:46.902 11:29:12 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:46.902 11:29:12 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.902 11:29:12 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:46.902 11:29:12 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.902 11:29:12 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.902 11:29:12 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.902 11:29:12 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:46.902 11:29:12 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.902 11:29:12 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:46.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.902 --rc genhtml_branch_coverage=1 00:05:46.902 --rc genhtml_function_coverage=1 00:05:46.902 --rc genhtml_legend=1 00:05:46.902 --rc geninfo_all_blocks=1 00:05:46.902 --rc geninfo_unexecuted_blocks=1 00:05:46.902 00:05:46.902 ' 00:05:46.902 11:29:12 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:46.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.902 --rc genhtml_branch_coverage=1 00:05:46.902 --rc genhtml_function_coverage=1 00:05:46.902 --rc genhtml_legend=1 00:05:46.902 --rc geninfo_all_blocks=1 00:05:46.902 --rc geninfo_unexecuted_blocks=1 00:05:46.902 00:05:46.902 ' 00:05:46.902 11:29:12 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:05:46.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.902 --rc genhtml_branch_coverage=1 00:05:46.902 --rc genhtml_function_coverage=1 00:05:46.902 --rc genhtml_legend=1 00:05:46.902 --rc geninfo_all_blocks=1 00:05:46.902 --rc geninfo_unexecuted_blocks=1 00:05:46.902 00:05:46.902 ' 00:05:46.902 11:29:12 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:46.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.902 --rc genhtml_branch_coverage=1 00:05:46.902 --rc genhtml_function_coverage=1 00:05:46.902 --rc genhtml_legend=1 00:05:46.902 --rc geninfo_all_blocks=1 00:05:46.902 --rc geninfo_unexecuted_blocks=1 00:05:46.902 00:05:46.902 ' 00:05:46.902 11:29:12 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:46.902 11:29:12 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:46.902 11:29:12 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:46.902 11:29:12 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:46.902 11:29:12 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:46.902 11:29:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:47.164 ************************************ 00:05:47.164 START TEST nvmf_target_core 00:05:47.164 ************************************ 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:47.164 * Looking for test storage... 00:05:47.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:47.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.164 --rc genhtml_branch_coverage=1 00:05:47.164 --rc genhtml_function_coverage=1 00:05:47.164 --rc genhtml_legend=1 00:05:47.164 --rc geninfo_all_blocks=1 00:05:47.164 --rc geninfo_unexecuted_blocks=1 00:05:47.164 00:05:47.164 ' 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:47.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.164 --rc genhtml_branch_coverage=1 00:05:47.164 --rc genhtml_function_coverage=1 00:05:47.164 --rc genhtml_legend=1 00:05:47.164 --rc geninfo_all_blocks=1 00:05:47.164 --rc geninfo_unexecuted_blocks=1 00:05:47.164 00:05:47.164 ' 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:47.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.164 --rc genhtml_branch_coverage=1 00:05:47.164 --rc genhtml_function_coverage=1 00:05:47.164 --rc genhtml_legend=1 00:05:47.164 --rc geninfo_all_blocks=1 00:05:47.164 --rc geninfo_unexecuted_blocks=1 00:05:47.164 00:05:47.164 ' 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:47.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.164 --rc genhtml_branch_coverage=1 00:05:47.164 --rc genhtml_function_coverage=1 00:05:47.164 --rc genhtml_legend=1 00:05:47.164 --rc geninfo_all_blocks=1 00:05:47.164 --rc geninfo_unexecuted_blocks=1 00:05:47.164 00:05:47.164 ' 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:47.164 11:29:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:47.165 11:29:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:47.165 11:29:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:47.165 11:29:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:47.165 11:29:12 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:47.165 11:29:12 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.165 11:29:12 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.165 11:29:12 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.165 11:29:12 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:47.165 11:29:12 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.426 11:29:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:47.426 11:29:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:47.426 11:29:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:47.426 11:29:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:47.426 11:29:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:47.426 11:29:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:47.426 11:29:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:47.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:47.426 11:29:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:47.426 11:29:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:47.426 11:29:12 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:47.426 11:29:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:47.426 11:29:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:47.427 
************************************ 00:05:47.427 START TEST nvmf_abort 00:05:47.427 ************************************ 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:47.427 * Looking for test storage... 00:05:47.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:47.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.427 --rc genhtml_branch_coverage=1 00:05:47.427 --rc genhtml_function_coverage=1 00:05:47.427 --rc genhtml_legend=1 00:05:47.427 --rc geninfo_all_blocks=1 00:05:47.427 --rc geninfo_unexecuted_blocks=1 00:05:47.427 00:05:47.427 ' 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:47.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.427 --rc genhtml_branch_coverage=1 00:05:47.427 --rc genhtml_function_coverage=1 00:05:47.427 --rc genhtml_legend=1 00:05:47.427 --rc geninfo_all_blocks=1 00:05:47.427 --rc geninfo_unexecuted_blocks=1 00:05:47.427 00:05:47.427 ' 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:47.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.427 --rc genhtml_branch_coverage=1 00:05:47.427 --rc genhtml_function_coverage=1 00:05:47.427 --rc genhtml_legend=1 00:05:47.427 --rc geninfo_all_blocks=1 00:05:47.427 --rc geninfo_unexecuted_blocks=1 00:05:47.427 00:05:47.427 ' 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:47.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.427 --rc genhtml_branch_coverage=1 00:05:47.427 --rc genhtml_function_coverage=1 00:05:47.427 --rc genhtml_legend=1 00:05:47.427 --rc geninfo_all_blocks=1 00:05:47.427 --rc geninfo_unexecuted_blocks=1 00:05:47.427 00:05:47.427 ' 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:47.427 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:47.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
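The scripts/common.sh xtrace that opens each run_test block above is the harness checking whether the installed lcov (version 1.15 here, extracted with awk '{print $NF}') is older than 2 before enabling the extra --rc coverage options; the same check re-runs for every test, which is why the LCOV/LCOV_OPTS export block repeats throughout this log. A condensed sketch of the traced logic (simplified: the real cmp_versions also validates each field through its decimal helper, elided here):

    # split versions on any of ".-:" and compare numerically, field by field
    version_lt() {
        local -a ver1 ver2
        local v
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1   # greater: not less-than
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
        done
        return 1    # all fields equal: not less-than
    }

    # lcov 1.15 is less than 2, so the branch/function coverage knobs get added
    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi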
00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:47.689 11:29:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:55.838 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:55.838 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:55.838 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:55.838 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:55.838 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:55.838 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:55.838 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:55.838 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:55.838 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:55.838 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:55.838 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:55.838 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:55.838 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:55.838 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:55.838 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:55.838 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:55.838 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:55.838 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:55.838 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:55.838 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:55.838 11:29:20 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:55.838 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:55.838 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:55.838 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:55.838 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:55.838 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:55.838 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:55.838 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:55.838 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:55.838 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:55.838 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:55.838 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:55.838 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:55.838 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:55.839 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:55.839 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:55.839 11:29:20 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:55.839 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:05:55.839 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:55.839 11:29:20 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:55.839 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:55.839 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.453 ms 00:05:55.839 00:05:55.839 --- 10.0.0.2 ping statistics --- 00:05:55.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:55.839 rtt min/avg/max/mdev = 0.453/0.453/0.453/0.000 ms 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:55.839 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:55.839 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:05:55.839 00:05:55.839 --- 10.0.0.1 ping statistics --- 00:05:55.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:55.839 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=852096 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 852096 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 852096 ']' 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:55.839 11:29:20 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:55.839 [2024-11-15 11:29:20.550107] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
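nvmfappstart, traced just above, amounts to launching the target inside the test namespace and waiting for its RPC socket. A minimal sketch of the equivalent shell (the backgrounding and pid capture are paraphrased; waitforlisten is the harness helper named in the trace):

    # run nvmf_tgt in the target namespace, as nvmfappstart does above
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!                  # 852096 in this run
    waitforlisten "$nvmfpid"    # block until /var/tmp/spdk.sock accepts RPCs
    # -m 0xE = 0b1110: cores 1, 2 and 3, matching the three "Reactor started
    # on core" notices below; -e 0xFFFF enables all tracepoint groups; -i 0
    # is the shared-memory instance ID (NVMF_APP_SHM_ID)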
00:05:55.839 [2024-11-15 11:29:20.550172] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:55.839 [2024-11-15 11:29:20.654241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:55.839 [2024-11-15 11:29:20.707718] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:55.839 [2024-11-15 11:29:20.707772] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:55.839 [2024-11-15 11:29:20.707781] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:55.839 [2024-11-15 11:29:20.707788] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:55.839 [2024-11-15 11:29:20.707795] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:55.839 [2024-11-15 11:29:20.709735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.839 [2024-11-15 11:29:20.710000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.839 [2024-11-15 11:29:20.710000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:56.101 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:56.101 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:05:56.101 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:56.101 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:56.101 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:56.102 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:56.102 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:56.102 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.102 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:56.102 [2024-11-15 11:29:21.428898] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:56.102 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.102 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:56.102 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.102 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:56.102 Malloc0 00:05:56.102 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.102 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:56.102 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.102 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:56.102 Delay0 
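With the target up, abort.sh assembles its I/O stack entirely over RPC: the transport and the delay-wrapped malloc bdev were just created, and the subsystem, namespace, and listener calls follow below. Stripped of the rpc_cmd/xtrace wrapping, the construction is roughly the following (the explicit rpc.py spelling is an assumption; the harness's rpc_cmd issues the same methods against /var/tmp/spdk.sock):

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    rpc.py bdev_malloc_create 64 4096 -b Malloc0        # 64 MiB bdev, 4096-byte blocks
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
           -r 1000000 -t 1000000 -w 1000000 -n 1000000  # latency args verbatim from the trace
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # -a: allow any host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The delay bdev is what gives the abort example below (-q 128, -t 1, lcore mask 0x1) something to cancel, and its final tallies reconcile exactly: 127 completed + 28625 failed I/Os = 28752 = 28690 abort commands submitted + 62 that failed to submit, with 28629 of the submitted aborts succeeding and 61 not.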
00:05:56.102 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.102 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:56.102 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.102 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:56.102 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.102 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:56.102 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.102 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:56.102 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.102 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:56.102 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.102 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:56.102 [2024-11-15 11:29:21.517036] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:56.102 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.102 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:56.102 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.102 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:56.102 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.102 11:29:21 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:56.363 [2024-11-15 11:29:21.668196] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:58.278 Initializing NVMe Controllers 00:05:58.278 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:58.278 controller IO queue size 128 less than required 00:05:58.278 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:58.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:58.278 Initialization complete. Launching workers. 
00:05:58.278 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28625 00:05:58.278 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28690, failed to submit 62 00:05:58.278 success 28629, unsuccessful 61, failed 0 00:05:58.278 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:58.278 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:58.278 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:58.278 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.278 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:58.278 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:58.278 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:58.278 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:58.278 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:58.278 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:58.278 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:58.278 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:58.278 rmmod nvme_tcp 00:05:58.278 rmmod nvme_fabrics 00:05:58.278 rmmod nvme_keyring 00:05:58.540 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:58.540 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:58.540 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:58.540 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 852096 ']' 00:05:58.540 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 852096 00:05:58.540 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 852096 ']' 00:05:58.540 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 852096 00:05:58.540 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:05:58.540 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:58.540 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 852096 00:05:58.540 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:05:58.540 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:05:58.540 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 852096' 00:05:58.540 killing process with pid 852096 00:05:58.540 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # kill 852096 00:05:58.540 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@976 -- # wait 852096 00:05:58.540 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:58.540 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:58.540 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:58.540 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:58.540 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:58.540 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:58.540 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:58.540 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:58.540 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:58.540 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:58.540 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:58.540 11:29:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:01.088 00:06:01.088 real 0m13.358s 00:06:01.088 user 0m13.727s 00:06:01.088 sys 0m6.681s 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:01.088 ************************************ 00:06:01.088 END TEST nvmf_abort 00:06:01.088 ************************************ 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:01.088 ************************************ 00:06:01.088 START TEST nvmf_ns_hotplug_stress 00:06:01.088 ************************************ 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:01.088 * Looking for test storage... 
00:06:01.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:01.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.088 --rc genhtml_branch_coverage=1 00:06:01.088 --rc genhtml_function_coverage=1 00:06:01.088 --rc genhtml_legend=1 00:06:01.088 --rc geninfo_all_blocks=1 00:06:01.088 --rc geninfo_unexecuted_blocks=1 00:06:01.088 00:06:01.088 ' 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:01.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.088 --rc genhtml_branch_coverage=1 00:06:01.088 --rc genhtml_function_coverage=1 00:06:01.088 --rc genhtml_legend=1 00:06:01.088 --rc geninfo_all_blocks=1 00:06:01.088 --rc geninfo_unexecuted_blocks=1 00:06:01.088 00:06:01.088 ' 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:01.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.088 --rc genhtml_branch_coverage=1 00:06:01.088 --rc genhtml_function_coverage=1 00:06:01.088 --rc genhtml_legend=1 00:06:01.088 --rc geninfo_all_blocks=1 00:06:01.088 --rc geninfo_unexecuted_blocks=1 00:06:01.088 00:06:01.088 ' 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:01.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.088 --rc genhtml_branch_coverage=1 00:06:01.088 --rc genhtml_function_coverage=1 00:06:01.088 --rc genhtml_legend=1 00:06:01.088 --rc geninfo_all_blocks=1 00:06:01.088 --rc geninfo_unexecuted_blocks=1 00:06:01.088 00:06:01.088 ' 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:01.088 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:01.089 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:01.089 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:01.089 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.089 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:01.089 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:01.089 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:01.089 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:01.089 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:01.089 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:01.089 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:01.089 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
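The record that follows logs a genuine bash bug: nvmf/common.sh line 33 runs '[' '' -eq 1 ']' against an unset variable, and test's -eq requires integers on both sides, so bash rejects the empty string with "[: : integer expression expected" and falls through. A minimal sketch of the usual guard; SOME_FLAG is illustrative, the real variable name is not visible in the trace:

# Sketch: default the value so the arithmetic test always sees an integer.
# SOME_FLAG stands in for whatever common.sh line 33 actually tests.
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
fi

00:06:01.089 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress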
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:01.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:01.089 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:01.089 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:01.089 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:01.089 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:01.089 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:01.089 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:01.089 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:01.089 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:01.089 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:01.089 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:01.089 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:01.089 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:01.089 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:01.089 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:01.089 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:01.089 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:01.089 11:29:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:09.234 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:09.234 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:09.234 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:09.234 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:09.234 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:09.234 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:09.234 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:09.234 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:09.234 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:09.234 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:09.234 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:06:09.234 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:09.234 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:09.235 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:09.235 
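Each "Found ..." line in this scan comes from matching a device's vendor:device pair against the e810/x722/mlx ID arrays built above; the interface names are recovered later by sysfs globbing (nvmf/common.sh@411 in the records below). Standalone, that lookup is just the following, with the PCI address taken from this trace:

# List kernel net interfaces backed by one PCI function, as the trace does
# for the two E810 ports 0000:4b:00.0 and 0000:4b:00.1.
pci=0000:4b:00.0
for dev in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$dev" ] || continue      # glob may not match if the NIC has no netdev
    echo "${dev##*/}"              # prints e.g. cvl_0_0
done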
11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:09.235 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:09.235 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:09.235 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:09.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:09.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:06:09.235 00:06:09.235 --- 10.0.0.2 ping statistics --- 00:06:09.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:09.235 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:09.235 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:09.235 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:06:09.235 00:06:09.235 --- 10.0.0.1 ping statistics --- 00:06:09.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:09.235 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:09.235 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:09.236 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:09.236 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:09.236 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:09.236 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:09.236 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=857053 00:06:09.236 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 857053 00:06:09.236 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:09.236 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 
857053 ']' 00:06:09.236 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.236 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:09.236 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.236 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:09.236 11:29:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:09.236 [2024-11-15 11:29:34.054686] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:06:09.236 [2024-11-15 11:29:34.054757] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:09.236 [2024-11-15 11:29:34.155587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:09.236 [2024-11-15 11:29:34.207028] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:09.236 [2024-11-15 11:29:34.207079] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:09.236 [2024-11-15 11:29:34.207092] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:09.236 [2024-11-15 11:29:34.207099] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:09.236 [2024-11-15 11:29:34.207106] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
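nvmf_tgt is launching here (pid 857053, run inside the cvl_0_0_ns_spdk namespace set up above); once waitforlisten returns, the records below build the test fixture and then loop for the duration of the perf run, detaching and re-attaching namespace 1 while growing NULL1. Condensed to its skeleton as a sketch: the RPC calls are the ones visible in the trace, while the $rpc shorthand, the loop framing, and PERF_PID (857520 in this run, set when spdk_nvme_perf is launched) are illustrative:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 512 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc bdev_null_create NULL1 1000 512
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
# spdk_nvme_perf reads in the background while namespaces come and go underneath it
null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))   # trace shows 1001, 1002, 1003, ...
    $rpc bdev_null_resize NULL1 "$null_size"
done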
00:06:09.236 [2024-11-15 11:29:34.208993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.236 [2024-11-15 11:29:34.209154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.236 [2024-11-15 11:29:34.209156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:09.504 11:29:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:09.504 11:29:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:06:09.504 11:29:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:09.504 11:29:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:09.504 11:29:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:09.504 11:29:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:09.504 11:29:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:09.504 11:29:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:09.768 [2024-11-15 11:29:35.133964] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:09.768 11:29:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:10.029 11:29:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:10.290 [2024-11-15 11:29:35.540855] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:10.290 11:29:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:10.290 11:29:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:10.552 Malloc0 00:06:10.552 11:29:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:10.813 Delay0 00:06:10.813 11:29:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.074 11:29:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:11.074 NULL1 00:06:11.074 11:29:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:11.335 11:29:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=857520 00:06:11.335 11:29:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:11.335 11:29:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857520 00:06:11.335 11:29:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.596 11:29:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.858 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:11.858 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:11.858 true 00:06:11.858 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857520 00:06:11.858 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.119 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.380 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:12.380 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:12.380 true 00:06:12.380 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857520 00:06:12.380 11:29:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.641 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.901 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:12.901 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:12.901 true 00:06:12.901 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857520 00:06:12.901 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.162 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.422 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:13.422 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:13.422 true 00:06:13.684 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857520 00:06:13.684 11:29:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.684 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.944 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:13.944 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:14.205 true 00:06:14.205 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857520 00:06:14.205 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.205 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.465 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:14.465 11:29:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:14.726 true 00:06:14.726 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857520 00:06:14.726 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.726 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.987 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:14.987 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:15.247 true 00:06:15.247 11:29:40 
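Between mutations the script re-checks the perf process with kill -0 857520: signal 0 delivers nothing and only asks the kernel whether the pid exists and is signalable, so the loop stops as soon as the 30-second perf run exits. In isolation:

# kill -0 performs only the existence/permission check; no signal is sent.
if kill -0 857520 2>/dev/null; then
    echo "perf (pid 857520) still running"
fi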
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857520 00:06:15.247 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.248 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.509 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:15.509 11:29:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:15.770 true 00:06:15.770 11:29:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857520 00:06:15.770 11:29:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.156 Read completed with error (sct=0, sc=11) 00:06:17.156 11:29:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:17.156 11:29:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:17.156 11:29:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:17.156 true 00:06:17.156 11:29:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857520 00:06:17.156 11:29:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.099 11:29:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.360 11:29:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:18.360 11:29:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:18.360 true 00:06:18.360 11:29:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857520 00:06:18.360 11:29:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.621 11:29:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.881 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:18.881 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:18.881 true 00:06:18.881 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857520 00:06:18.881 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.141 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.416 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.416 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:19.416 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:19.416 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:19.416 true 00:06:19.416 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857520 00:06:19.416 11:29:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.362 11:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.623 11:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:20.623 11:29:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:20.623 true 00:06:20.623 11:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857520 00:06:20.623 11:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.884 
11:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.144 11:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:21.144 11:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:21.144 true 00:06:21.144 11:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857520 00:06:21.144 11:29:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.528 11:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.528 11:29:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:22.528 11:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:22.789 true 00:06:22.789 11:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857520 00:06:22.789 11:29:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.732 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.732 11:29:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.732 11:29:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:23.732 11:29:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:23.993 true 00:06:23.993 11:29:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857520 00:06:23.993 11:29:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.254 11:29:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.254 11:29:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:24.254 11:29:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:24.547 true 00:06:24.547 11:29:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857520 00:06:24.547 11:29:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.825 11:29:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.825 11:29:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:24.825 11:29:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:25.114 true 00:06:25.114 11:29:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857520 00:06:25.114 11:29:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.429 11:29:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.429 11:29:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:25.429 11:29:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:25.712 true 00:06:25.712 11:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857520 00:06:25.712 11:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.712 11:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.995 11:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:25.995 11:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:26.255 true 00:06:26.255 11:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857520 00:06:26.255 11:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.255 11:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.516 11:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:26.516 11:29:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:26.776 true 00:06:26.776 11:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857520 00:06:26.776 11:29:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.717 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.976 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.976 11:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.976 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.976 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.976 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.976 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.976 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:27.976 11:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:27.976 11:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:28.236 true 00:06:28.236 11:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857520 00:06:28.236 11:29:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.177 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.177 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.177 11:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.177 11:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:29.177 11:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:29.438 true 00:06:29.438 11:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857520 00:06:29.438 11:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.699 11:29:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.699 11:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:29.699 11:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:29.959 true 00:06:29.959 11:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857520 00:06:29.959 11:29:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.347 11:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.347 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.347 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.347 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.347 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.347 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.347 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:31.347 11:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:31.347 11:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:31.607 true 00:06:31.607 11:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857520 00:06:31.607 11:29:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.551 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.551 11:29:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.551 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.551 11:29:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:32.551 11:29:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:32.811 true 00:06:32.811 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857520 00:06:32.811 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.811 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.072 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:33.072 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:33.334 true 00:06:33.334 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857520 00:06:33.334 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.334 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.596 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:33.596 11:29:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:33.857 true 00:06:33.857 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857520 00:06:33.857 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.119 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.119 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:34.119 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:34.380 true 00:06:34.380 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857520 00:06:34.380 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.642 11:29:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.642 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:06:34.642 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.642 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:34.642 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:34.906 true 00:06:34.906 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857520 00:06:34.906 11:30:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:35.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:35.850 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.850 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:06:35.850 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:06:36.110 true 00:06:36.110 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857520 00:06:36.110 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.372 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.372 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:06:36.372 11:30:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:06:36.632 true 00:06:36.632 11:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857520 00:06:36.632 11:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.895 11:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.158 11:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:06:37.158 11:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:06:37.158 true 00:06:37.158 11:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857520 00:06:37.158 11:30:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.419 11:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.680 11:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:06:37.680 11:30:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:06:37.680 true 00:06:37.680 11:30:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857520 00:06:37.680 11:30:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.941 11:30:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.202 11:30:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:06:38.202 11:30:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:06:38.202 true 00:06:38.202 11:30:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857520 00:06:38.202 11:30:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.463 11:30:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.724 11:30:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:06:38.724 11:30:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:06:38.724 true 00:06:38.724 11:30:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857520 00:06:38.724 11:30:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.109 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.109 11:30:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.109 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.109 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.109 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:06:40.109 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.109 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.109 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.109 11:30:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:06:40.109 11:30:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:06:40.370 true 00:06:40.370 11:30:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857520 00:06:40.370 11:30:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.312 11:30:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.312 11:30:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:06:41.312 11:30:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:06:41.573 true 00:06:41.573 11:30:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857520 00:06:41.573 11:30:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.833 11:30:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.833 Initializing NVMe Controllers 00:06:41.833 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:41.833 Controller IO queue size 128, less than required. 00:06:41.833 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:41.833 Controller IO queue size 128, less than required. 00:06:41.833 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:41.833 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:41.833 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:41.833 Initialization complete. Launching workers. 
00:06:41.833 ========================================================
00:06:41.833 Latency(us)
00:06:41.833 Device Information : IOPS MiB/s Average min max
00:06:41.833 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1866.05 0.91 31735.05 1214.58 1079698.52
00:06:41.833 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 13952.69 6.81 9138.73 1198.88 401413.06
00:06:41.833 ========================================================
00:06:41.833 Total : 15818.74 7.72 11804.29 1198.88 1079698.52
00:06:41.833
00:06:41.833 11:30:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:06:41.833 11:30:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:06:42.094 true 00:06:42.094 11:30:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 857520 00:06:42.094 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (857520) - No such process 00:06:42.094 11:30:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 857520 00:06:42.094 11:30:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.356 11:30:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:42.356 11:30:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:42.356 11:30:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:42.356 11:30:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:42.356 11:30:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:42.356 11:30:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:42.617 null0 00:06:42.617 11:30:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:42.617 11:30:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:42.617 11:30:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:42.877 null1 00:06:42.877 11:30:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:42.877 11:30:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:42.877 11:30:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:42.877 null2 00:06:43.138 11:30:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:43.138
11:30:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:43.138 11:30:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:43.138 null3 00:06:43.138 11:30:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:43.138 11:30:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:43.138 11:30:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:43.398 null4 00:06:43.398 11:30:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:43.399 11:30:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:43.399 11:30:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:43.399 null5 00:06:43.660 11:30:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:43.660 11:30:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:43.660 11:30:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:43.660 null6 00:06:43.660 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:43.660 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:43.660 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:43.919 null7 00:06:43.919 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:43.919 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
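At this point the log changes phase: the resize loop's I/O generator (PID 857520) has exited, the script has reaped it and removed namespaces 1 and 2, and lines 58-60 of ns_hotplug_stress.sh provision eight 100 MiB null bdevs (null0 through null7, 4096-byte blocks) to back eight parallel hotplug workers. A minimal sketch of that provisioning step, reconstructed from the xtrace records above; the rpc variable standing in for the full scripts/rpc.py path is an illustrative shorthand, not from the log:

    # Shorthand for the RPC client invoked throughout this log (illustrative variable name).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8
    for ((i = 0; i < nthreads; i++)); do
        # One 100 MiB null bdev with a 4096-byte block size per worker,
        # matching the "bdev_null_create nullN 100 4096" calls above.
        "$rpc" bdev_null_create "null$i" 100 4096
    done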
00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
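The records above expose the body of add_remove (ns_hotplug_stress.sh lines 14-18): each worker is handed one namespace ID and one null bdev, then attaches and detaches that namespace ten times against nqn.2016-06.io.spdk:cnode1. A sketch assembled from those xtrace lines rather than the script verbatim, with $rpc as defined in the sketch above:

    add_remove() {
        local nsid=$1 bdev=$2
        # Ten add/remove rounds per worker, as the "(( i < 10 ))" records show.
        for ((i = 0; i < 10; i++)); do
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }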
00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
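The "(( ++i ))" and "pids+=($!)" pairs threaded through the records above come from the launcher loop (script lines 62-64), which backgrounds one add_remove worker per null bdev and collects the PIDs that line 66 later blocks on (visible just below as "wait 864438 864440 ..."). A sketch under the same assumptions:

    pids=()
    for ((i = 0; i < nthreads; i++)); do
        # Namespace IDs 1..8 paired with null0..null7, per the "add_remove N nullM" records.
        add_remove "$((i + 1))" "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"   # reap all eight workers before the test moves on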
00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:43.920 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:43.921 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 864438 864440 864443 864446 864449 864452 864455 864457 00:06:43.921 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:43.921 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:43.921 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:43.921 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:44.181 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:44.181 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:44.181 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:44.181 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.181 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:44.181 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:44.181 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:44.181 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:44.181 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.181 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.181 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:44.181 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.181 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.181 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:44.181 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.181 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.181 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:44.181 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.181 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.181 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:44.443 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.443 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.443 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:44.443 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.443 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.443 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:44.443 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.443 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.443 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:44.443 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.443 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.443 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:44.443 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:44.443 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:44.443 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.443 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:44.443 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:44.443 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:44.443 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:44.704 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.704 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.704 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:44.704 11:30:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:44.704 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.704 11:30:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.704 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:44.704 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.704 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.704 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:44.704 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.704 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.704 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:44.704 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.704 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.704 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:44.704 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.704 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.704 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:44.704 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.704 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.704 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:44.704 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:44.704 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.965 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.965 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.965 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:44.965 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:44.965 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:44.965 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:44.965 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:44.965 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:44.965 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.965 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.965 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:44.965 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.965 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.965 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:44.965 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:44.965 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.965 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.965 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:44.965 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.965 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.965 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:45.226 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.226 11:30:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.226 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:45.226 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.226 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.226 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:45.226 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:45.226 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.226 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.226 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:45.226 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.226 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.226 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.226 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:45.226 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:45.226 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:45.226 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:45.226 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:45.226 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.226 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.226 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:45.488 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:45.488 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:45.488 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.488 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.488 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:45.488 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.488 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.488 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:45.488 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.488 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.488 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:45.488 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.488 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.488 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:45.488 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.488 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.488 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:45.488 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:45.488 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.488 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.488 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:45.488 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.488 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.488 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.488 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:45.488 11:30:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:45.750 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:45.750 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:45.750 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.750 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.750 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:45.750 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:45.750 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.750 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.750 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:45.750 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:45.750 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.750 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.750 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:45.750 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:45.750 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.750 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.750 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:46.010 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.010 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.010 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:46.010 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:46.010 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.010 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:46.010 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.010 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.010 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:46.010 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.010 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.010 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:46.010 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.010 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.010 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:46.010 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:46.010 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
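For contrast with the eight-way churn here, the single-namespace phase that opened this stretch of the log (the null_size=1024 through null_size=1039 records) came from script lines 44-50: while kill -0 still found the I/O generator alive, each pass detached namespace 1, re-attached the Delay0 bdev, bumped null_size, and resized the NULL1 bdev to the new value; the loop ended once the shell reported "kill: (857520) - No such process", after which line 53 waited on the PID. A sketch reconstructed from those records, with perf_pid as an illustrative name for the tracked PID:

    # The initial null_size is not visible in this stretch; the first pass shown sets it to 1024.
    while kill -0 "$perf_pid"; do                                       # script line 44
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # line 45
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # line 46
        null_size=$((null_size + 1))                                    # line 49, per the xtrace records
        "$rpc" bdev_null_resize NULL1 "$null_size"                      # line 50
    done
    wait "$perf_pid"                                                    # line 53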
00:06:46.010 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.010 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:46.010 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:46.010 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:46.010 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.010 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.010 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:46.010 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:46.274 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:46.274 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.274 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.274 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.274 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:46.274 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.274 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:46.274 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:46.274 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.274 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.274 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.274 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:46.274 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.274 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.274 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:46.274 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.274 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.274 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:46.274 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.274 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.274 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:46.274 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:46.274 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.274 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.274 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:46.535 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:46.535 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:46.535 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:46.535 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:46.535 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.535 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.535 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:46.535 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:46.535 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:46.535 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.535 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.535 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:46.535 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.535 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.535 11:30:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:46.795 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.795 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.795 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:46.795 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.795 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.795 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:46.795 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:46.795 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.795 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.795 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:46.795 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.795 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.795 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:46.795 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:46.795 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.795 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.796 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:46.796 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.796 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:46.796 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:46.796 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.796 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.796 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:46.796 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:47.056 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:47.056 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.056 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.056 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:47.056 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:47.056 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.056 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.056 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:47.056 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.056 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.056 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:47.056 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.056 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.056 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:47.056 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.056 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.056 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:47.056 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:47.056 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.056 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.056 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:47.056 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:47.056 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.056 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.056 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:47.056 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.316 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:47.316 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:47.316 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:47.316 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:47.316 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:47.316 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:47.316 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:47.316 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:47.316 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:47.316 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:47.316 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:47.316 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:47.316 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:47.316 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:47.316 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:47.316 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:47.316 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:47.316 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:47.316 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:47.316 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:47.316 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:47.316 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:47.576 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:47.576 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:47.576 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:47.576 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:47.576 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:47.576 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:47.576 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:47.576 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:47.576 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:47.576 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:47.576 11:30:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:47.576 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:47.576 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:47.576 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:47.837 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:47.837 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:47.837 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:47.837 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:47.837 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:47.837 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:47.837 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:47.837 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
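That is the last full burst of the hotplug loop; teardown starts below. As a reading aid, here is a minimal re-sketch of the race the iterations above were running. It is not the script verbatim: the worker names and the random namespace choice are illustrative, and it assumes the eight null bdevs null0..null7 and the subsystem nqn.2016-06.io.spdk:cnode1 already exist.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  add_worker() {                     # cf. ns_hotplug_stress.sh lines 16-17
    for ((i = 0; i < 10; i++)); do
      n=$((RANDOM % 8))
      $rpc nvmf_subsystem_add_ns -n $((n + 1)) "$nqn" "null$n" || true
    done
  }

  remove_worker() {                  # cf. line 18, racing the adder
    for ((i = 0; i < 10; i++)); do
      $rpc nvmf_subsystem_remove_ns "$nqn" $((RANDOM % 8 + 1)) || true
    done
  }

  add_worker & remove_worker & wait
  # individual RPCs may fail (an NSID already attached, or already gone);
  # the test only requires that the target survives the churn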
00:06:47.837 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:47.837 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:47.837 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:47.837 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:47.837 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:47.837 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:06:47.837 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:06:47.837 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:06:47.837 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:06:47.837 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:06:47.837 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:06:47.837 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:06:47.837 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:06:47.837 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:06:48.098 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:06:48.098 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:06:48.098 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:06:48.098 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 857053 ']'
00:06:48.098 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 857053
00:06:48.098 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 857053 ']'
00:06:48.098 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 857053
00:06:48.098 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname
00:06:48.098 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:06:48.098 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 857053
00:06:48.098 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:06:48.098 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:06:48.098 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 857053'
killing process with pid 857053
00:06:48.098 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 857053
00:06:48.098 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 857053
00:06:48.098 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:06:48.098 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:06:48.098 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:06:48.098 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:06:48.098 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:06:48.098 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:06:48.098 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:06:48.098 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:06:48.098 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:06:48.098 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:48.098 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:06:48.098 11:30:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:50.650 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:06:50.650
00:06:50.650 real 0m49.471s
00:06:50.650 user 3m16.474s
00:06:50.650 sys 0m16.463s
00:06:50.650 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:50.650 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:06:50.650 ************************************
00:06:50.650 END TEST nvmf_ns_hotplug_stress
00:06:50.650 ************************************
00:06:50.650 11:30:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:06:50.650 11:30:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:06:50.650 11:30:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable
00:06:50.650 11:30:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:06:50.650 ************************************
00:06:50.650 START TEST nvmf_delete_subsystem
00:06:50.650 ************************************
00:06:50.650 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:06:50.650 * Looking for test storage...
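The nvmftestfini trace above compresses three cleanup stages: unloading the nvme kernel modules, killing the target process, and restoring iptables plus the test network namespace. The module unload is retried because nvme-tcp can stay busy briefly while connections drain; in spirit (a sketch, not the traced helper verbatim) it is:

  set +e
  for i in {1..20}; do
    modprobe -v -r nvme-tcp && break
    sleep 1      # assumption: the real helper paces its 20 retries somehow
  done
  modprobe -v -r nvme-fabrics
  set -e

  kill 857053    # nvmf_tgt pid from this run
  wait 857053    # reap it before the next test reuses the NICs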
00:06:50.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:50.650 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:50.650 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:06:50.650 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:50.650 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:50.650 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:50.650 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:50.650 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:50.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.651 --rc genhtml_branch_coverage=1 00:06:50.651 --rc genhtml_function_coverage=1 00:06:50.651 --rc genhtml_legend=1 00:06:50.651 --rc geninfo_all_blocks=1 00:06:50.651 --rc geninfo_unexecuted_blocks=1 00:06:50.651 00:06:50.651 ' 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:50.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.651 --rc genhtml_branch_coverage=1 00:06:50.651 --rc genhtml_function_coverage=1 00:06:50.651 --rc genhtml_legend=1 00:06:50.651 --rc geninfo_all_blocks=1 00:06:50.651 --rc geninfo_unexecuted_blocks=1 00:06:50.651 00:06:50.651 ' 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:50.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.651 --rc genhtml_branch_coverage=1 00:06:50.651 --rc genhtml_function_coverage=1 00:06:50.651 --rc genhtml_legend=1 00:06:50.651 --rc geninfo_all_blocks=1 00:06:50.651 --rc geninfo_unexecuted_blocks=1 00:06:50.651 00:06:50.651 ' 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:50.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.651 --rc genhtml_branch_coverage=1 00:06:50.651 --rc genhtml_function_coverage=1 00:06:50.651 --rc genhtml_legend=1 00:06:50.651 --rc geninfo_all_blocks=1 00:06:50.651 --rc geninfo_unexecuted_blocks=1 00:06:50.651 00:06:50.651 ' 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:50.651 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:50.652 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.652 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.652 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.652 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:50.652 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:50.652 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:50.652 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:50.652 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:50.652 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:50.652 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:50.652 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:50.652 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:50.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:50.652 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:50.652 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:50.652 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:50.652 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:50.652 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:50.652 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:50.652 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:50.652 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:50.652 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:50.652 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:50.652 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:50.652 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:50.652 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:50.652 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:50.652 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:50.652 11:30:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:58.799 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:58.799 
11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:58.799 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:58.799 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:58.799 Found net devices under 0000:4b:00.1: cvl_0_1 
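Both E810 ports have now been resolved to kernel interfaces. The lookup traced at nvmf/common.sh lines 410-428 is plain sysfs globbing; as a standalone sketch using the two PCI addresses this host reported:

  # each PCI function lists its netdevs under /sys/bus/pci/devices/<addr>/net/
  for pci in 0000:4b:00.0 0000:4b:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # basenames only, e.g. cvl_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done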
00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes
00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:06:58.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:06:58.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.675 ms
00:06:58.799
00:06:58.799 --- 10.0.0.2 ping statistics ---
00:06:58.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:58.799 rtt min/avg/max/mdev = 0.675/0.675/0.675/0.000 ms
00:06:58.799 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:06:58.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:06:58.800 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms
00:06:58.800
00:06:58.800 --- 10.0.0.1 ping statistics ---
00:06:58.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:58.800 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms
00:06:58.800 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:06:58.800 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0
00:06:58.800 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:06:58.800 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:06:58.800 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:06:58.800 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:06:58.800 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:06:58.800 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:06:58.800 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:06:58.800 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:06:58.800 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:06:58.800 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable
00:06:58.800 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:06:58.800 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=870121
00:06:58.800 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 870121
00:06:58.800 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:06:58.800 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 870121 ']'
00:06:58.800 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:58.800 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100
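nvmfappstart has launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace, and waitforlisten begins polling pid 870121 with up to 100 retries. The helper's body is mostly elided by xtrace here; a plausible minimal equivalent (illustrative only, not the autotest function) polls the RPC socket until the daemon answers:

  wait_for_rpc() {   # hypothetical stand-in for waitforlisten
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=100
    while ((retries-- > 0)); do
      kill -0 "$pid" 2> /dev/null || return 1   # target died early
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s "$sock" rpc_get_methods &> /dev/null && return 0
      sleep 0.1
    done
    return 1
  }
  wait_for_rpc 870121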
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.800 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:58.800 11:30:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:58.800 [2024-11-15 11:30:23.505826] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:06:58.800 [2024-11-15 11:30:23.505894] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:58.800 [2024-11-15 11:30:23.605334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:58.800 [2024-11-15 11:30:23.656246] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:58.800 [2024-11-15 11:30:23.656294] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:58.800 [2024-11-15 11:30:23.656302] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:58.800 [2024-11-15 11:30:23.656310] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:58.800 [2024-11-15 11:30:23.656316] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:58.800 [2024-11-15 11:30:23.658150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.800 [2024-11-15 11:30:23.658154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.061 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:59.061 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:06:59.061 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:59.061 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:59.061 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.061 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:59.061 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:59.061 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.061 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.061 [2024-11-15 11:30:24.366430] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:59.061 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.061 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:59.061 11:30:24 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.061 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.061 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.061 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:59.061 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.061 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.061 [2024-11-15 11:30:24.390732] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:59.061 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.061 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:59.061 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.061 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.061 NULL1 00:06:59.061 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.061 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:59.061 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.061 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.061 Delay0 00:06:59.061 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.061 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.061 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:59.061 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.061 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.061 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=870162 00:06:59.061 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:59.061 11:30:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:59.061 [2024-11-15 11:30:24.517740] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
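The trace above is the whole target bring-up for this test: nvmf_tgt is started inside the cvl_0_0_ns_spdk namespace, a TCP transport and subsystem are created over RPC, and a 1 s delay bdev (Delay0, layered on the null bdev NULL1) is attached as the namespace, so that perf I/O is still queued when the subsystem is deleted roughly two seconds later. The aborted commands below then complete with sct=0, sc=8, which corresponds to the NVMe generic status 0x08, Command Aborted due to SQ Deletion. A minimal standalone sketch of the same sequence, assuming an SPDK tree at $SPDK and driving the default /var/tmp/spdk.sock socket with scripts/rpc.py; the RPC names and arguments are copied from the trace, while the waitforlisten helper is replaced by a plain sleep:

    NS=cvl_0_0_ns_spdk                                   # namespace created by the trace above
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc() { "$SPDK/scripts/rpc.py" "$@"; }

    ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    sleep 2                                              # crude stand-in for waitforlisten

    rpc nvmf_create_transport -t tcp -o -u 8192
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc bdev_null_create NULL1 1000 512                  # 1000 MiB backing bdev, 512 B blocks
    rpc bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000      # ~1 s avg/p99 latency, reads and writes
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    "$SPDK/build/bin/spdk_nvme_perf" -c 0xC \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!

    sleep 2                                              # let the 128-deep queue build on Delay0
    rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 # delete with I/O still in flight

    delay=0                                              # wait loop as in delete_subsystem.sh
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 30 )) && exit 1                     # give perf ~15 s to notice and exit
        sleep 0.5
    done

The configured 1 s delay is also why the second perf run further down reports averages of roughly 1,000,000 us: those I/Os complete normally, but only after Delay0 has held each one for about a second.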
00:07:00.976 11:30:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:00.976 11:30:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:00.976 11:30:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:01.236 Read completed with error (sct=0, sc=8) 00:07:01.236 starting I/O failed: -6 00:07:01.236 Read completed with error (sct=0, sc=8) 00:07:01.236 Read completed with error (sct=0, sc=8) 00:07:01.236 Write completed with error (sct=0, sc=8) 00:07:01.236 Read completed with error (sct=0, sc=8) 00:07:01.236 starting I/O failed: -6 00:07:01.236 Read completed with error (sct=0, sc=8) 00:07:01.236 Read completed with error (sct=0, sc=8) 00:07:01.236 Read completed with error (sct=0, sc=8) 00:07:01.236 Read completed with error (sct=0, sc=8) 00:07:01.236 starting I/O failed: -6 00:07:01.236 Read completed with error (sct=0, sc=8) 00:07:01.236 Read completed with error (sct=0, sc=8) 00:07:01.236 Write completed with error (sct=0, sc=8) 00:07:01.236 Write completed with error (sct=0, sc=8) 00:07:01.236 starting I/O failed: -6 00:07:01.236 Write completed with error (sct=0, sc=8) 00:07:01.236 Read completed with error (sct=0, sc=8) 00:07:01.236 Write completed with error (sct=0, sc=8) 00:07:01.236 Read completed with error (sct=0, sc=8) 00:07:01.236 starting I/O failed: -6 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 starting I/O failed: -6 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 starting I/O failed: -6 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 starting I/O failed: -6 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 starting I/O failed: -6 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 starting I/O failed: -6 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 starting I/O failed: -6 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 starting I/O failed: -6 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 [2024-11-15 11:30:26.648192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c2c0 is same with the state(6) to be set 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed 
with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 starting I/O failed: -6 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 starting I/O failed: -6 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error 
(sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 starting I/O failed: -6 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 starting I/O failed: -6 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 starting I/O failed: -6 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 starting I/O failed: -6 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 starting I/O failed: -6 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 starting I/O failed: -6 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 starting I/O failed: -6 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 starting I/O failed: -6 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 [2024-11-15 11:30:26.649712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9d30000c40 is same with the state(6) to be set 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Read completed with 
error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Read completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:01.237 Write completed with error (sct=0, sc=8) 00:07:02.178 [2024-11-15 11:30:27.618148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6d9a0 is same with the state(6) to be set 00:07:02.178 Write completed with error (sct=0, sc=8) 00:07:02.178 Read completed with error (sct=0, sc=8) 00:07:02.178 Read completed with error (sct=0, sc=8) 00:07:02.178 Write completed with error (sct=0, sc=8) 00:07:02.178 Write completed with error (sct=0, sc=8) 00:07:02.178 Read completed with error (sct=0, sc=8) 00:07:02.178 Read completed with error (sct=0, sc=8) 00:07:02.178 Write completed with error (sct=0, sc=8) 00:07:02.178 Read completed with error (sct=0, sc=8) 00:07:02.178 Read completed with error (sct=0, sc=8) 00:07:02.178 Read completed with error (sct=0, sc=8) 00:07:02.178 Write completed with error (sct=0, sc=8) 00:07:02.178 Read completed with error (sct=0, sc=8) 00:07:02.178 Read completed with error (sct=0, sc=8) 00:07:02.178 Write completed with error (sct=0, sc=8) 00:07:02.178 Read completed with error (sct=0, sc=8) 00:07:02.178 Read completed with error (sct=0, sc=8) 00:07:02.178 Read completed with error (sct=0, sc=8) 00:07:02.178 Read completed with error (sct=0, sc=8) 00:07:02.178 Read completed with error (sct=0, sc=8) 00:07:02.178 Read completed with error (sct=0, sc=8) 00:07:02.178 Read completed with error (sct=0, sc=8) 00:07:02.178 Read completed with error (sct=0, sc=8) 00:07:02.178 Read completed with error (sct=0, sc=8) 00:07:02.178 Read completed with error (sct=0, sc=8) 00:07:02.178 Read completed with error (sct=0, sc=8) 00:07:02.178 [2024-11-15 11:30:27.651880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c4a0 is same with the state(6) to be set 00:07:02.178 Read completed with error (sct=0, sc=8) 00:07:02.178 Read completed with error (sct=0, sc=8) 00:07:02.178 Read completed with error (sct=0, sc=8) 00:07:02.178 Read completed with error (sct=0, sc=8) 00:07:02.178 Write completed with error (sct=0, sc=8) 00:07:02.178 Write completed with error (sct=0, sc=8) 
00:07:02.178 Write completed with error (sct=0, sc=8) 00:07:02.178 Read completed with error (sct=0, sc=8) 00:07:02.178 Write completed with error (sct=0, sc=8) 00:07:02.178 Read completed with error (sct=0, sc=8) 00:07:02.178 Write completed with error (sct=0, sc=8) 00:07:02.178 Write completed with error (sct=0, sc=8) 00:07:02.178 Write completed with error (sct=0, sc=8) 00:07:02.178 Read completed with error (sct=0, sc=8) 00:07:02.178 Read completed with error (sct=0, sc=8) 00:07:02.178 Read completed with error (sct=0, sc=8) 00:07:02.178 Read completed with error (sct=0, sc=8) 00:07:02.178 Read completed with error (sct=0, sc=8) 00:07:02.178 Write completed with error (sct=0, sc=8) 00:07:02.178 Write completed with error (sct=0, sc=8) 00:07:02.178 Read completed with error (sct=0, sc=8) 00:07:02.179 Read completed with error (sct=0, sc=8) 00:07:02.179 Read completed with error (sct=0, sc=8) 00:07:02.179 Read completed with error (sct=0, sc=8) 00:07:02.179 Write completed with error (sct=0, sc=8) 00:07:02.179 Write completed with error (sct=0, sc=8) 00:07:02.179 Write completed with error (sct=0, sc=8) 00:07:02.179 [2024-11-15 11:30:27.652609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9d3000d020 is same with the state(6) to be set 00:07:02.179 Read completed with error (sct=0, sc=8) 00:07:02.179 Read completed with error (sct=0, sc=8) 00:07:02.179 Read completed with error (sct=0, sc=8) 00:07:02.179 Read completed with error (sct=0, sc=8) 00:07:02.179 Write completed with error (sct=0, sc=8) 00:07:02.179 Read completed with error (sct=0, sc=8) 00:07:02.179 Read completed with error (sct=0, sc=8) 00:07:02.179 Read completed with error (sct=0, sc=8) 00:07:02.179 Read completed with error (sct=0, sc=8) 00:07:02.179 Read completed with error (sct=0, sc=8) 00:07:02.179 Read completed with error (sct=0, sc=8) 00:07:02.179 Write completed with error (sct=0, sc=8) 00:07:02.179 Write completed with error (sct=0, sc=8) 00:07:02.179 Write completed with error (sct=0, sc=8) 00:07:02.179 Read completed with error (sct=0, sc=8) 00:07:02.179 Read completed with error (sct=0, sc=8) 00:07:02.179 Write completed with error (sct=0, sc=8) 00:07:02.179 Write completed with error (sct=0, sc=8) 00:07:02.179 Read completed with error (sct=0, sc=8) 00:07:02.179 Read completed with error (sct=0, sc=8) 00:07:02.179 Read completed with error (sct=0, sc=8) 00:07:02.179 [2024-11-15 11:30:27.652805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9d3000d7c0 is same with the state(6) to be set 00:07:02.179 Read completed with error (sct=0, sc=8) 00:07:02.179 Write completed with error (sct=0, sc=8) 00:07:02.179 Write completed with error (sct=0, sc=8) 00:07:02.179 Write completed with error (sct=0, sc=8) 00:07:02.179 Write completed with error (sct=0, sc=8) 00:07:02.179 Read completed with error (sct=0, sc=8) 00:07:02.179 Read completed with error (sct=0, sc=8) 00:07:02.179 Read completed with error (sct=0, sc=8) 00:07:02.179 Write completed with error (sct=0, sc=8) 00:07:02.179 Write completed with error (sct=0, sc=8) 00:07:02.179 Read completed with error (sct=0, sc=8) 00:07:02.179 Read completed with error (sct=0, sc=8) 00:07:02.179 Read completed with error (sct=0, sc=8) 00:07:02.179 Read completed with error (sct=0, sc=8) 00:07:02.179 Read completed with error (sct=0, sc=8) 00:07:02.179 Read completed with error (sct=0, sc=8) 00:07:02.179 Read completed with error (sct=0, sc=8) 00:07:02.179 Write completed with error (sct=0, sc=8) 
00:07:02.179 Read completed with error (sct=0, sc=8) 00:07:02.179 Write completed with error (sct=0, sc=8) 00:07:02.179 Read completed with error (sct=0, sc=8) 00:07:02.179 Read completed with error (sct=0, sc=8) 00:07:02.179 Read completed with error (sct=0, sc=8) 00:07:02.179 Read completed with error (sct=0, sc=8) 00:07:02.179 Write completed with error (sct=0, sc=8) 00:07:02.179 [2024-11-15 11:30:27.653169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf6c860 is same with the state(6) to be set 00:07:02.179 Initializing NVMe Controllers 00:07:02.179 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:02.179 Controller IO queue size 128, less than required. 00:07:02.179 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:02.179 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:02.179 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:02.179 Initialization complete. Launching workers. 00:07:02.179 ======================================================== 00:07:02.179 Latency(us) 00:07:02.179 Device Information : IOPS MiB/s Average min max 00:07:02.179 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 171.87 0.08 892393.50 411.15 1011718.92 00:07:02.179 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 163.92 0.08 926659.95 334.37 2002357.18 00:07:02.179 ======================================================== 00:07:02.179 Total : 335.80 0.16 909121.21 334.37 2002357.18 00:07:02.179 00:07:02.179 [2024-11-15 11:30:27.653478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf6d9a0 (9): Bad file descriptor 00:07:02.179 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:02.179 11:30:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.179 11:30:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:02.179 11:30:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 870162 00:07:02.179 11:30:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:02.751 11:30:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:02.751 11:30:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 870162 00:07:02.751 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (870162) - No such process 00:07:02.751 11:30:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 870162 00:07:02.751 11:30:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:07:02.751 11:30:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 870162 00:07:02.751 11:30:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:07:02.751 11:30:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.751 11:30:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- 
# type -t wait 00:07:02.751 11:30:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.751 11:30:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 870162 00:07:02.751 11:30:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:07:02.751 11:30:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:02.751 11:30:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:02.751 11:30:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:02.751 11:30:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:02.751 11:30:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.751 11:30:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.751 11:30:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.751 11:30:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:02.751 11:30:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.751 11:30:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.751 [2024-11-15 11:30:28.184766] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:02.751 11:30:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.751 11:30:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.751 11:30:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.751 11:30:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:02.751 11:30:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.751 11:30:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=870969 00:07:02.751 11:30:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:02.751 11:30:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:02.751 11:30:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 870969 00:07:02.751 11:30:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:03.012 [2024-11-15 11:30:28.290508] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. 
This behavior is deprecated and will be removed in a future release. 00:07:03.272 11:30:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:03.272 11:30:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 870969 00:07:03.272 11:30:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:03.842 11:30:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:03.842 11:30:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 870969 00:07:03.842 11:30:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:04.412 11:30:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:04.412 11:30:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 870969 00:07:04.412 11:30:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:04.985 11:30:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:04.985 11:30:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 870969 00:07:04.985 11:30:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:05.245 11:30:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:05.245 11:30:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 870969 00:07:05.245 11:30:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:05.817 11:30:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:05.817 11:30:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 870969 00:07:05.817 11:30:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:06.078 Initializing NVMe Controllers 00:07:06.078 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:06.078 Controller IO queue size 128, less than required. 00:07:06.078 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:06.078 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:06.078 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:06.078 Initialization complete. Launching workers. 
00:07:06.078 ======================================================== 00:07:06.078 Latency(us) 00:07:06.078 Device Information : IOPS MiB/s Average min max 00:07:06.078 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002241.52 1000133.22 1007147.61 00:07:06.078 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002953.19 1000221.33 1008746.77 00:07:06.078 ======================================================== 00:07:06.078 Total : 256.00 0.12 1002597.35 1000133.22 1008746.77 00:07:06.078 00:07:06.078 [2024-11-15 11:30:31.422211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20994c0 is same with the state(6) to be set 00:07:06.339 11:30:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:06.339 11:30:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 870969 00:07:06.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (870969) - No such process 00:07:06.339 11:30:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 870969 00:07:06.339 11:30:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:06.339 11:30:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:06.339 11:30:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:06.339 11:30:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:06.339 11:30:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:06.339 11:30:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:06.339 11:30:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:06.339 11:30:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:06.339 rmmod nvme_tcp 00:07:06.339 rmmod nvme_fabrics 00:07:06.339 rmmod nvme_keyring 00:07:06.339 11:30:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:06.339 11:30:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:06.339 11:30:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:06.339 11:30:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 870121 ']' 00:07:06.339 11:30:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 870121 00:07:06.339 11:30:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 870121 ']' 00:07:06.339 11:30:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 870121 00:07:06.339 11:30:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:07:06.339 11:30:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:06.339 11:30:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 870121 00:07:06.600 11:30:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:06.600 11:30:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:06.600 11:30:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 870121' 00:07:06.600 killing process with pid 870121 00:07:06.600 11:30:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 870121 00:07:06.600 11:30:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 870121 00:07:06.600 11:30:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:06.600 11:30:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:06.600 11:30:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:06.600 11:30:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:06.600 11:30:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:06.600 11:30:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:06.600 11:30:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:06.600 11:30:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:06.600 11:30:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:06.600 11:30:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:06.600 11:30:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:06.600 11:30:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:09.147 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:09.147 00:07:09.147 real 0m18.362s 00:07:09.147 user 0m30.808s 00:07:09.148 sys 0m6.774s 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:09.148 ************************************ 00:07:09.148 END TEST nvmf_delete_subsystem 00:07:09.148 ************************************ 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:09.148 ************************************ 00:07:09.148 START TEST nvmf_host_management 00:07:09.148 ************************************ 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 
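Before host_management.sh produces any output, the tail of the previous test (nvmftestfini, traced just above) is worth restating: it kills the target, unloads the kernel NVMe modules, strips only the iptables rule that was tagged with an SPDK_NVMF comment when it was inserted, and tears down the namespace. A sketch of the equivalent cleanup, using the names from this run; _remove_spdk_ns is assumed to boil down to deleting the namespace:

    kill "$nvmfpid" && wait "$nvmfpid"       # killprocess: stop nvmf_tgt and reap it
    modprobe -v -r nvme-tcp                  # trace shows nvme_tcp, nvme_fabrics and
    modprobe -v -r nvme-fabrics              # nvme_keyring being rmmod'ed
    iptables-save | grep -v SPDK_NVMF | iptables-restore  # drop only the tagged ACCEPT rule
    ip netns delete cvl_0_0_ns_spdk          # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                 # final flush, as at 00:07:09 above

Tagging the rule with -m comment at insert time (nvmf/common.sh@790 earlier in this log) is what makes the grep-based restore safe: only rules this test added are removed, and any pre-existing firewall configuration survives intact.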
00:07:09.148 * Looking for test storage... 00:07:09.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:09.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.148 --rc genhtml_branch_coverage=1 00:07:09.148 --rc genhtml_function_coverage=1 00:07:09.148 --rc genhtml_legend=1 00:07:09.148 --rc geninfo_all_blocks=1 00:07:09.148 --rc geninfo_unexecuted_blocks=1 00:07:09.148 00:07:09.148 ' 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:09.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.148 --rc genhtml_branch_coverage=1 00:07:09.148 --rc genhtml_function_coverage=1 00:07:09.148 --rc genhtml_legend=1 00:07:09.148 --rc geninfo_all_blocks=1 00:07:09.148 --rc geninfo_unexecuted_blocks=1 00:07:09.148 00:07:09.148 ' 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:09.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.148 --rc genhtml_branch_coverage=1 00:07:09.148 --rc genhtml_function_coverage=1 00:07:09.148 --rc genhtml_legend=1 00:07:09.148 --rc geninfo_all_blocks=1 00:07:09.148 --rc geninfo_unexecuted_blocks=1 00:07:09.148 00:07:09.148 ' 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:09.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.148 --rc genhtml_branch_coverage=1 00:07:09.148 --rc genhtml_function_coverage=1 00:07:09.148 --rc genhtml_legend=1 00:07:09.148 --rc geninfo_all_blocks=1 00:07:09.148 --rc geninfo_unexecuted_blocks=1 00:07:09.148 00:07:09.148 ' 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.148 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:09.149 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.149 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:09.149 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:09.149 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:09.149 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:09.149 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:09.149 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:09.149 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:07:09.149 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:09.149 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:09.149 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:09.149 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:09.149 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:09.149 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:09.149 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:09.149 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:09.149 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:09.149 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:09.149 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:09.149 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:09.149 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:09.149 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:09.149 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:09.149 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:09.149 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:09.149 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:09.149 11:30:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:17.295 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:17.295 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:17.295 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:17.295 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.296 11:30:41 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:17.296 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:17.296 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:17.296 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.529 ms 00:07:17.296 00:07:17.296 --- 10.0.0.2 ping statistics --- 00:07:17.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.296 rtt min/avg/max/mdev = 0.529/0.529/0.529/0.000 ms 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:17.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:17.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:07:17.296 00:07:17.296 --- 10.0.0.1 ping statistics --- 00:07:17.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.296 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=875949 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 875949 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:17.296 11:30:41 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 875949 ']' 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:17.296 11:30:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.296 [2024-11-15 11:30:41.966723] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:07:17.296 [2024-11-15 11:30:41.966789] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:17.296 [2024-11-15 11:30:42.067804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:17.296 [2024-11-15 11:30:42.121347] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:17.296 [2024-11-15 11:30:42.121396] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:17.296 [2024-11-15 11:30:42.121405] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:17.296 [2024-11-15 11:30:42.121412] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:17.296 [2024-11-15 11:30:42.121419] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
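The stretch of nvmf/common.sh trace above is the target-side bring-up: nvmf_tcp_init moves one port of the E810 pair into a private network namespace, assigns 10.0.0.2 (target) and 10.0.0.1 (initiator), opens TCP port 4420 in the firewall, and verifies reachability with ping; nvmfappstart then launches nvmf_tgt inside that namespace and waits on its RPC socket. A condensed sketch of the equivalent commands (the polling loop standing in for waitforlisten is a simplified assumption, not the exact helper body):

    # the target NIC gets its own namespace; the initiator NIC stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # -m 0x1E pins the reactors to cores 1-4, matching the four "Reactor started" notices below
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    # waitforlisten (sketch): poll until the app answers on its RPC socket
    until ./scripts/rpc.py rpc_get_methods &>/dev/null; do sleep 0.1; done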
00:07:17.296 [2024-11-15 11:30:42.123869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.296 [2024-11-15 11:30:42.124033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:17.296 [2024-11-15 11:30:42.124199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.296 [2024-11-15 11:30:42.124199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:17.296 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:17.296 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:07:17.296 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:17.296 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:17.296 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.558 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:17.558 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:17.558 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.558 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.558 [2024-11-15 11:30:42.838860] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:17.558 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.558 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:17.558 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:17.558 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.558 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:17.558 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:17.558 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:17.558 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.558 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.558 Malloc0 00:07:17.558 [2024-11-15 11:30:42.917830] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:17.558 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.558 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:17.558 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:17.558 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.558 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=876240 00:07:17.558 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 876240 /var/tmp/bdevperf.sock 00:07:17.558 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 876240 ']' 00:07:17.558 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:17.558 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:17.558 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:17.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:17.558 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:17.558 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:17.558 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:17.558 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.558 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:17.558 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:17.558 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:17.558 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:17.558 { 00:07:17.558 "params": { 00:07:17.558 "name": "Nvme$subsystem", 00:07:17.558 "trtype": "$TEST_TRANSPORT", 00:07:17.558 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:17.558 "adrfam": "ipv4", 00:07:17.558 "trsvcid": "$NVMF_PORT", 00:07:17.558 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:17.558 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:17.558 "hdgst": ${hdgst:-false}, 00:07:17.558 "ddgst": ${ddgst:-false} 00:07:17.558 }, 00:07:17.558 "method": "bdev_nvme_attach_controller" 00:07:17.558 } 00:07:17.558 EOF 00:07:17.558 )") 00:07:17.558 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:17.558 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:17.558 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:17.558 11:30:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:17.558 "params": { 00:07:17.558 "name": "Nvme0", 00:07:17.558 "trtype": "tcp", 00:07:17.558 "traddr": "10.0.0.2", 00:07:17.558 "adrfam": "ipv4", 00:07:17.558 "trsvcid": "4420", 00:07:17.558 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:17.558 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:17.558 "hdgst": false, 00:07:17.558 "ddgst": false 00:07:17.558 }, 00:07:17.558 "method": "bdev_nvme_attach_controller" 00:07:17.558 }' 00:07:17.558 [2024-11-15 11:30:43.026792] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
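The gen_nvmf_target_json trace above shows how the harness builds bdevperf's configuration on the fly: one heredoc-templated bdev_nvme_attach_controller entry per subsystem, joined on commas and validated through jq. A minimal sketch of the pattern (simplified from the xtrace; the surrounding plumbing in nvmf/common.sh is more involved):

    gen_nvmf_target_json() {
        local subsystem config=()
        for subsystem in "${@:-1}"; do
            # one attach entry per subsystem; parameters resolve from the test environment
            # (<<- strips leading tabs, so the heredoc body may stay indented in the script)
            config+=("$(cat <<-EOF
            {
              "params": {
                "name": "Nvme$subsystem",
                "trtype": "$TEST_TRANSPORT",
                "traddr": "$NVMF_FIRST_TARGET_IP",
                "adrfam": "ipv4",
                "trsvcid": "$NVMF_PORT",
                "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
                "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
                "hdgst": ${hdgst:-false},
                "ddgst": ${ddgst:-false}
              },
              "method": "bdev_nvme_attach_controller"
            }
            EOF
            )")
        done
        local IFS=,
        printf '%s\n' "${config[*]}" | jq .    # join the entries, validate, pretty-print
    }

With TEST_TRANSPORT=tcp and the addresses configured earlier, this yields exactly the resolved JSON printed in the trace (traddr 10.0.0.2, trsvcid 4420, digests off).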
00:07:17.558 [2024-11-15 11:30:43.026861] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid876240 ] 00:07:17.819 [2024-11-15 11:30:43.120027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.819 [2024-11-15 11:30:43.173488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.080 Running I/O for 10 seconds... 00:07:18.080 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:18.080 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:07:18.080 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:18.080 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.080 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.080 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.080 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:18.080 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:18.080 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:18.080 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:18.080 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:18.080 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:18.080 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:18.080 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:18.080 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:18.080 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:18.080 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.080 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.080 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.080 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:18.080 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:07:18.080 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:18.342 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:18.342 
11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:18.342 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:18.342 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:18.342 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.342 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.342 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.342 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=529 00:07:18.342 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 529 -ge 100 ']' 00:07:18.342 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:18.342 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:18.342 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:18.342 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:18.342 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.342 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.342 [2024-11-15 11:30:43.772999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3150 is same with the state(6) to be set 00:07:18.342 [2024-11-15 11:30:43.773069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3150 is same with the state(6) to be set 00:07:18.343 [2024-11-15 11:30:43.773078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3150 is same with the state(6) to be set 00:07:18.343 [2024-11-15 11:30:43.773086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3150 is same with the state(6) to be set 00:07:18.343 [2024-11-15 11:30:43.773093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3150 is same with the state(6) to be set 00:07:18.343 [2024-11-15 11:30:43.773101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3150 is same with the state(6) to be set 00:07:18.343 [2024-11-15 11:30:43.773108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3150 is same with the state(6) to be set 00:07:18.343 [2024-11-15 11:30:43.773115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3150 is same with the state(6) to be set 00:07:18.343 [2024-11-15 11:30:43.773122] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3150 is same with the state(6) to be set 00:07:18.343 [2024-11-15 11:30:43.773128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3150 is same with the state(6) to be set 00:07:18.343 [2024-11-15 11:30:43.773135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e3150 is 
same with the state(6) to be set 00:07:18.343 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.343 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:18.343 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:18.343 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.343 [2024-11-15 11:30:43.788142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:18.343 [2024-11-15 11:30:43.788198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.343 [2024-11-15 11:30:43.788209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:18.343 [2024-11-15 11:30:43.788217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.343 [2024-11-15 11:30:43.788226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:18.343 [2024-11-15 11:30:43.788235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.343 [2024-11-15 11:30:43.788243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:18.343 [2024-11-15 11:30:43.788251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.343 [2024-11-15 11:30:43.788258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4b000 is same with the state(6) to be set 00:07:18.343 [2024-11-15 11:30:43.789192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.343 [2024-11-15 11:30:43.789213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.343 [2024-11-15 11:30:43.789229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.343 [2024-11-15 11:30:43.789238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.343 [2024-11-15 11:30:43.789248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.343 [2024-11-15 11:30:43.789255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.343 [2024-11-15 11:30:43.789265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.343 [2024-11-15 11:30:43.789273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
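This is the core of the host-management scenario: with bdevperf pushing I/O at queue depth 64, the harness revokes the host's access to the subsystem and immediately restores it. The two RPCs traced here, in outline (rpc_cmd is the harness wrapper around rpc.py talking to the target app):

    # revoke the host: the target tears down the qpair (the repeated tcp.c recv-state
    # notices above) and completes queued WRITEs as ABORTED - SQ DELETION (below)
    rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # re-grant access so the initiator's controller reset can reconnect
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

The flood of per-LBA abort notices that follows is the expected fallout: one completion per outstanding write on the deleted submission queue.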
00:07:18.343 [2024-11-15 11:30:43.789282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.343 [2024-11-15 11:30:43.789289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.343 [2024-11-15 11:30:43.789299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.343 [2024-11-15 11:30:43.789307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.343 [2024-11-15 11:30:43.789317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.343 [2024-11-15 11:30:43.789324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.343 [2024-11-15 11:30:43.789334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.343 [2024-11-15 11:30:43.789342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.343 [2024-11-15 11:30:43.789359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.343 [2024-11-15 11:30:43.789367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.343 [2024-11-15 11:30:43.789376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.343 [2024-11-15 11:30:43.789384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.343 [2024-11-15 11:30:43.789393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.343 [2024-11-15 11:30:43.789401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.343 [2024-11-15 11:30:43.789411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.343 [2024-11-15 11:30:43.789418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.343 [2024-11-15 11:30:43.789428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.343 [2024-11-15 11:30:43.789436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.343 [2024-11-15 11:30:43.789445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.343 [2024-11-15 11:30:43.789452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.343 
[2024-11-15 11:30:43.789462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.343 [2024-11-15 11:30:43.789470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.343 [2024-11-15 11:30:43.789480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.343 [2024-11-15 11:30:43.789488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.343 [2024-11-15 11:30:43.789497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.343 [2024-11-15 11:30:43.789504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.343 [2024-11-15 11:30:43.789514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.343 [2024-11-15 11:30:43.789521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.343 [2024-11-15 11:30:43.789531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.343 [2024-11-15 11:30:43.789538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.343 [2024-11-15 11:30:43.789548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.343 [2024-11-15 11:30:43.789555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.343 [2024-11-15 11:30:43.789572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.343 [2024-11-15 11:30:43.789582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.343 [2024-11-15 11:30:43.789591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.343 [2024-11-15 11:30:43.789599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.343 [2024-11-15 11:30:43.789608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.343 [2024-11-15 11:30:43.789617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.343 [2024-11-15 11:30:43.789626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.343 [2024-11-15 11:30:43.789634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.343 [2024-11-15 
11:30:43.789643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.343 [2024-11-15 11:30:43.789650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.343 [2024-11-15 11:30:43.789660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.343 [2024-11-15 11:30:43.789667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.343 [2024-11-15 11:30:43.789676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.343 [2024-11-15 11:30:43.789684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.343 [2024-11-15 11:30:43.789694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.343 [2024-11-15 11:30:43.789701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.343 [2024-11-15 11:30:43.789711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.343 [2024-11-15 11:30:43.789719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.344 [2024-11-15 11:30:43.789728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.344 [2024-11-15 11:30:43.789736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.344 [2024-11-15 11:30:43.789745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.344 [2024-11-15 11:30:43.789753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.344 [2024-11-15 11:30:43.789763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.344 [2024-11-15 11:30:43.789770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.344 [2024-11-15 11:30:43.789779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.344 [2024-11-15 11:30:43.789787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.344 [2024-11-15 11:30:43.789799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.344 [2024-11-15 11:30:43.789807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.344 [2024-11-15 
11:30:43.789816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.344 [2024-11-15 11:30:43.789823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.344 [2024-11-15 11:30:43.789833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.344 [2024-11-15 11:30:43.789841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.344 [2024-11-15 11:30:43.789851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.344 [2024-11-15 11:30:43.789860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.344 [2024-11-15 11:30:43.789870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.344 [2024-11-15 11:30:43.789878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.344 [2024-11-15 11:30:43.789887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.344 [2024-11-15 11:30:43.789894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.344 [2024-11-15 11:30:43.789904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.344 [2024-11-15 11:30:43.789913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.344 [2024-11-15 11:30:43.789924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.344 [2024-11-15 11:30:43.789933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.344 [2024-11-15 11:30:43.789944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.344 [2024-11-15 11:30:43.789953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.344 [2024-11-15 11:30:43.789963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.344 [2024-11-15 11:30:43.789971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.344 [2024-11-15 11:30:43.789980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.344 [2024-11-15 11:30:43.789987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.344 [2024-11-15 
11:30:43.789997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.344 [2024-11-15 11:30:43.790005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.344 [2024-11-15 11:30:43.790014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:1 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:18.344 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.344 [2024-11-15 11:30:43.790036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.344 [2024-11-15 11:30:43.790047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.344 [2024-11-15 11:30:43.790055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.344 [2024-11-15 11:30:43.790066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.344 [2024-11-15 11:30:43.790073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.344 [2024-11-15 11:30:43.790083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.344 [2024-11-15 11:30:43.790090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.344 [2024-11-15 11:30:43.790100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.344 [2024-11-15 11:30:43.790107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.344 [2024-11-15 11:30:43.790116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.344 [2024-11-15 11:30:43.790124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.344 [2024-11-15 11:30:43.790134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.344 [2024-11-15 11:30:43.790141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.344 [2024-11-15 11:30:43.790150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.344 [2024-11-15 11:30:43.790157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.344 [2024-11-15 11:30:43.790167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.344 [2024-11-15 11:30:43.790175] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.344 [2024-11-15 11:30:43.790184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.344 [2024-11-15 11:30:43.790191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.344 [2024-11-15 11:30:43.790200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.344 [2024-11-15 11:30:43.790207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.344 [2024-11-15 11:30:43.790217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.344 [2024-11-15 11:30:43.790225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.344 [2024-11-15 11:30:43.790234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.344 [2024-11-15 11:30:43.790244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.344 [2024-11-15 11:30:43.790253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.344 [2024-11-15 11:30:43.790260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.344 [2024-11-15 11:30:43.790270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.344 [2024-11-15 11:30:43.790277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.344 [2024-11-15 11:30:43.790287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.344 [2024-11-15 11:30:43.790294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.344 [2024-11-15 11:30:43.790303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.344 [2024-11-15 11:30:43.790311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.344 [2024-11-15 11:30:43.790320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.344 [2024-11-15 11:30:43.790329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.344 [2024-11-15 11:30:43.790339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.344 [2024-11-15 11:30:43.790346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:18.344 11:30:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:07:18.344 [2024-11-15 11:30:43.791697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:07:18.344 task offset: 81920 on job bdev=Nvme0n1 fails
00:07:18.344
00:07:18.344 Latency(us)
00:07:18.344 [2024-11-15T10:30:43.842Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:18.344 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:18.344 Job: Nvme0n1 ended in about 0.45 seconds with error
00:07:18.344 Verification LBA range: start 0x0 length 0x400
00:07:18.344 Nvme0n1 : 0.45 1435.93 89.75 143.59 0.00 39339.55 1727.15 37573.97
00:07:18.344 [2024-11-15T10:30:43.842Z] ===================================================================================================================
00:07:18.344 [2024-11-15T10:30:43.843Z] Total : 1435.93 89.75 143.59 0.00 39339.55 1727.15 37573.97
00:07:18.345 [2024-11-15 11:30:43.794004] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:18.345 [2024-11-15 11:30:43.794039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf4b000 (9): Bad file descriptor
00:07:18.345 [2024-11-15 11:30:43.805111] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
00:07:19.729 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 876240
00:07:19.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (876240) - No such process
00:07:19.729 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true
00:07:19.729 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:07:19.729 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:07:19.729 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:07:19.729 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:07:19.729 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:07:19.729 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:07:19.729 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:07:19.729 {
00:07:19.729 "params": {
00:07:19.729 "name": "Nvme$subsystem",
00:07:19.729 "trtype": "$TEST_TRANSPORT",
00:07:19.729 "traddr": "$NVMF_FIRST_TARGET_IP",
00:07:19.729 "adrfam": "ipv4",
00:07:19.729 "trsvcid": "$NVMF_PORT",
00:07:19.729 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:07:19.729 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:07:19.729 "hdgst": ${hdgst:-false},
00:07:19.729 "ddgst": ${ddgst:-false}
00:07:19.729 },
00:07:19.729 "method": "bdev_nvme_attach_controller"
00:07:19.729 }
00:07:19.729 EOF
00:07:19.729 )")
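Both bdevperf runs receive their configuration through bash process substitution: gen_nvmf_target_json's output is attached to a file descriptor, and that descriptor's path, /dev/fd/62 here (and /dev/fd/63 for the first run), is what bdevperf sees after --json. Equivalent invocation, as a sketch:

    # bash exposes the command's stdout as /dev/fd/N and substitutes that path
    ./build/examples/bdevperf --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 1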
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:19.729 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:19.729 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:19.729 11:30:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:19.729 "params": { 00:07:19.729 "name": "Nvme0", 00:07:19.729 "trtype": "tcp", 00:07:19.729 "traddr": "10.0.0.2", 00:07:19.729 "adrfam": "ipv4", 00:07:19.729 "trsvcid": "4420", 00:07:19.729 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:19.729 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:19.729 "hdgst": false, 00:07:19.729 "ddgst": false 00:07:19.729 }, 00:07:19.729 "method": "bdev_nvme_attach_controller" 00:07:19.729 }' 00:07:19.729 [2024-11-15 11:30:44.854227] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:07:19.729 [2024-11-15 11:30:44.854303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid876593 ] 00:07:19.729 [2024-11-15 11:30:44.947302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.729 [2024-11-15 11:30:44.998242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.990 Running I/O for 1 seconds... 00:07:20.930 1664.00 IOPS, 104.00 MiB/s 00:07:20.930 Latency(us) 00:07:20.930 [2024-11-15T10:30:46.428Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:20.930 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:20.930 Verification LBA range: start 0x0 length 0x400 00:07:20.930 Nvme0n1 : 1.03 1684.16 105.26 0.00 0.00 37298.66 3222.19 32549.55 00:07:20.930 [2024-11-15T10:30:46.428Z] =================================================================================================================== 00:07:20.930 [2024-11-15T10:30:46.428Z] Total : 1684.16 105.26 0.00 0.00 37298.66 3222.19 32549.55 00:07:21.191 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:21.191 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:21.191 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:21.191 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:21.191 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:21.191 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:21.191 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:21.191 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:21.191 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:21.191 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:21.191 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # 
modprobe -v -r nvme-tcp 00:07:21.191 rmmod nvme_tcp 00:07:21.191 rmmod nvme_fabrics 00:07:21.191 rmmod nvme_keyring 00:07:21.191 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:21.191 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:21.191 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:21.191 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 875949 ']' 00:07:21.191 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 875949 00:07:21.192 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 875949 ']' 00:07:21.192 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 875949 00:07:21.192 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:07:21.192 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:21.192 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 875949 00:07:21.192 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:21.192 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:21.192 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 875949' 00:07:21.192 killing process with pid 875949 00:07:21.192 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 875949 00:07:21.192 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 875949 00:07:21.192 [2024-11-15 11:30:46.677844] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:21.452 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:21.452 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:21.452 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:21.452 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:21.452 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:21.452 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:21.452 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:21.452 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:21.452 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:21.452 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:21.452 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:21.452 11:30:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:07:23.364 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:23.364 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:23.364 00:07:23.364 real 0m14.647s 00:07:23.364 user 0m22.944s 00:07:23.364 sys 0m6.784s 00:07:23.364 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:23.364 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:23.364 ************************************ 00:07:23.364 END TEST nvmf_host_management 00:07:23.364 ************************************ 00:07:23.364 11:30:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:23.364 11:30:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:23.364 11:30:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:23.364 11:30:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:23.626 ************************************ 00:07:23.626 START TEST nvmf_lvol 00:07:23.626 ************************************ 00:07:23.626 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:23.626 * Looking for test storage... 00:07:23.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:23.626 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:23.626 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:07:23.626 11:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:23.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.626 --rc genhtml_branch_coverage=1 00:07:23.626 --rc genhtml_function_coverage=1 00:07:23.626 --rc genhtml_legend=1 00:07:23.626 --rc geninfo_all_blocks=1 00:07:23.626 --rc geninfo_unexecuted_blocks=1 00:07:23.626 00:07:23.626 ' 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:23.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.626 --rc genhtml_branch_coverage=1 00:07:23.626 --rc genhtml_function_coverage=1 00:07:23.626 --rc genhtml_legend=1 00:07:23.626 --rc geninfo_all_blocks=1 00:07:23.626 --rc geninfo_unexecuted_blocks=1 00:07:23.626 00:07:23.626 ' 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:23.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.626 --rc genhtml_branch_coverage=1 00:07:23.626 --rc genhtml_function_coverage=1 00:07:23.626 --rc genhtml_legend=1 00:07:23.626 --rc geninfo_all_blocks=1 00:07:23.626 --rc geninfo_unexecuted_blocks=1 00:07:23.626 00:07:23.626 ' 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:23.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:23.626 --rc genhtml_branch_coverage=1 00:07:23.626 --rc genhtml_function_coverage=1 00:07:23.626 --rc genhtml_legend=1 00:07:23.626 --rc geninfo_all_blocks=1 00:07:23.626 --rc geninfo_unexecuted_blocks=1 00:07:23.626 00:07:23.626 ' 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:23.626 11:30:49 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:23.626 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.627 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.627 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.627 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:23.627 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.627 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:23.627 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:23.627 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:23.627 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:23.627 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:23.627 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:23.627 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:23.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:23.627 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:23.627 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:23.627 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:23.627 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:23.627 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:23.627 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:07:23.627 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:23.627 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:23.627 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:23.627 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:23.627 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:23.627 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:23.627 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:23.627 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:23.627 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:23.627 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:23.627 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:23.627 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:23.627 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:23.627 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:23.627 11:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:31.762 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:31.762 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:31.762 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:31.762 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:31.762 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:31.762 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:31.762 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:31.762 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:31.762 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:31.762 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:31.762 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:31.762 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:31.762 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:31.762 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:31.762 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:31.762 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:31.762 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:31.762 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:31.762 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:31.762 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:31.762 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:31.762 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:31.762 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:31.762 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:31.762 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:31.762 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:31.762 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:31.762 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:31.762 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:31.762 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:31.763 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:31.763 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:31.763 11:30:56 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:31.763 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:31.763 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:31.763 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:31.763 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.702 ms 00:07:31.763 00:07:31.763 --- 10.0.0.2 ping statistics --- 00:07:31.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.763 rtt min/avg/max/mdev = 0.702/0.702/0.702/0.000 ms 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:31.763 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:31.763 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:07:31.763 00:07:31.763 --- 10.0.0.1 ping statistics --- 00:07:31.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.763 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=881274 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 881274 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 881274 ']' 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:31.763 11:30:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:31.763 [2024-11-15 11:30:56.648336] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
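The nvmf_tcp_init sequence traced above gives the target port its own network namespace and a private 10.0.0.0/24 link to the initiator port, then proves both directions with ping before any NVMe/TCP traffic starts. A minimal standalone sketch of that wiring, assuming the cvl_0_0/cvl_0_1 interface names and the addresses used in this run:

    #!/usr/bin/env bash
    # Sketch of the namespace wiring performed by nvmf_tcp_init in the trace above.
    # Interface names (cvl_0_0/cvl_0_1) and addresses are the ones this run used.
    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"                        # target side lives in its own namespace
    ip link set cvl_0_0 netns "$NS"           # move the target port out of the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator address, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # Open the NVMe/TCP port; the real rule tags itself with an SPDK_NVMF comment
    # so the teardown can later strip exactly this rule and nothing else.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                        # root namespace -> target namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1    # target namespace -> root namespace

Once both pings succeed, the target is started inside the namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...), which is exactly the launch visible in the trace that follows.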
00:07:31.763 [2024-11-15 11:30:56.648401] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:31.763 [2024-11-15 11:30:56.748137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:31.764 [2024-11-15 11:30:56.800395] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:31.764 [2024-11-15 11:30:56.800445] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:31.764 [2024-11-15 11:30:56.800454] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:31.764 [2024-11-15 11:30:56.800462] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:31.764 [2024-11-15 11:30:56.800468] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:31.764 [2024-11-15 11:30:56.802607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.764 [2024-11-15 11:30:56.802716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.764 [2024-11-15 11:30:56.802858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.024 11:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:32.024 11:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:07:32.024 11:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:32.024 11:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:32.024 11:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:32.024 11:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:32.024 11:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:32.286 [2024-11-15 11:30:57.684032] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:32.286 11:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:32.548 11:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:32.548 11:30:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:32.808 11:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:32.808 11:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:33.068 11:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:33.329 11:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=8518b618-78e9-4a23-86d5-0afd48666660 00:07:33.329 11:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8518b618-78e9-4a23-86d5-0afd48666660 lvol 20 00:07:33.329 11:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=b9410158-7e75-4796-8059-e87b6103ec4f 00:07:33.329 11:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:33.591 11:30:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b9410158-7e75-4796-8059-e87b6103ec4f 00:07:33.851 11:30:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:33.851 [2024-11-15 11:30:59.334528] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:34.140 11:30:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:34.140 11:30:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=881867 00:07:34.140 11:30:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:34.140 11:30:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:35.082 11:31:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot b9410158-7e75-4796-8059-e87b6103ec4f MY_SNAPSHOT 00:07:35.343 11:31:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5a1ee556-ad05-4fc0-9194-d48b1f89f908 00:07:35.343 11:31:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize b9410158-7e75-4796-8059-e87b6103ec4f 30 00:07:35.603 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 5a1ee556-ad05-4fc0-9194-d48b1f89f908 MY_CLONE 00:07:35.863 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=314bf83b-ca59-4b16-a23a-31efa7045c14 00:07:35.863 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 314bf83b-ca59-4b16-a23a-31efa7045c14 00:07:36.123 11:31:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 881867 00:07:46.142 Initializing NVMe Controllers 00:07:46.142 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:46.142 Controller IO queue size 128, less than required. 00:07:46.142 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
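The perf job whose report continues directly below is running against a volume stack that the trace above assembled over RPC: two malloc bdevs striped into raid0, a logical-volume store on top, a 20 MiB lvol exported through nqn.2016-06.io.spdk:cnode0, and then snapshot, resize, clone, and inflate exercised while I/O is in flight. Condensed into a standalone sketch, with this run's concrete UUIDs (8518b618-..., b9410158-...) replaced by assumed shell-variable captures:

    #!/usr/bin/env bash
    # Condensed lvol test flow, following the rpc.py calls traced above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512                                  # Malloc0
    $rpc bdev_malloc_create 64 512                                  # Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                  # lvstore UUID
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                 # LVOL_BDEV_INIT_SIZE=20
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # ... spdk_nvme_perf runs against the listener while the volume is reshaped:
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)  # lvol becomes a thin volume over the snapshot
    $rpc bdev_lvol_resize "$lvol" 30                     # grow to LVOL_BDEV_FINAL_SIZE under live I/O
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"                      # detach the clone from its snapshot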
00:07:46.142 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:07:46.142 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:07:46.142 Initialization complete. Launching workers.
00:07:46.142 ========================================================
00:07:46.142 Latency(us)
00:07:46.142 Device Information : IOPS MiB/s Average min max
00:07:46.142 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15851.68 61.92 8076.15 1609.43 65422.97
00:07:46.142 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17045.13 66.58 7509.65 373.33 46662.08
00:07:46.142 ========================================================
00:07:46.142 Total : 32896.81 128.50 7782.62 373.33 65422.97
00:07:46.142
00:07:46.142 11:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:07:46.142 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b9410158-7e75-4796-8059-e87b6103ec4f
00:07:46.142 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8518b618-78e9-4a23-86d5-0afd48666660
00:07:46.142 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:07:46.142 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:07:46.142 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:07:46.142 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:46.142 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:07:46.142 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:46.142 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:07:46.142 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:46.142 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:07:46.142 rmmod nvme_tcp
00:07:46.142 rmmod nvme_fabrics
00:07:46.142 rmmod nvme_keyring
00:07:46.142 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:07:46.142 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:07:46.142 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:07:46.142 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 881274 ']'
00:07:46.142 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 881274
00:07:46.142 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 881274 ']'
00:07:46.142 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 881274
00:07:46.142 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname
00:07:46.142 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:07:46.142 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 881274
00:07:46.142 11:31:10
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:46.142 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:46.142 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 881274' 00:07:46.142 killing process with pid 881274 00:07:46.142 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 881274 00:07:46.142 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 881274 00:07:46.142 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:46.142 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:46.142 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:46.142 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:46.142 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:46.142 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:46.142 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:46.142 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:46.142 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:46.142 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.142 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:46.142 11:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.234 11:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:47.234 00:07:47.234 real 0m23.854s 00:07:47.234 user 1m4.710s 00:07:47.234 sys 0m8.533s 00:07:47.234 11:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:47.234 11:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:47.234 ************************************ 00:07:47.234 END TEST nvmf_lvol 00:07:47.234 ************************************ 00:07:47.495 11:31:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:47.495 11:31:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:47.495 11:31:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:47.495 11:31:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:47.495 ************************************ 00:07:47.496 START TEST nvmf_lvs_grow 00:07:47.496 ************************************ 00:07:47.496 11:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:47.496 * Looking for test storage... 
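Both tests above close the same way: nvmftestfini unloads the NVMe/TCP kernel modules, kills the target, and unwinds only the networking state the init step created. A rough standalone sketch of that teardown, following the nvmf/common.sh trace (the body of _remove_spdk_ns is not shown in the log, so the netns deletion here is an assumption):

    #!/usr/bin/env bash
    # Rough sketch of the nvmftestfini teardown traced above.
    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break   # retried: the module can still be busy right after a run
    done
    modprobe -v -r nvme-fabrics
    set -e
    kill "$nvmfpid"                        # target pid recorded when nvmf_tgt started (881274 here)
    # iptr: restore iptables minus the self-tagged SPDK rules, leaving everything else alone
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk        # assumed body of _remove_spdk_ns (not shown in the trace)
    ip -4 addr flush cvl_0_1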
00:07:47.496 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:47.496 11:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:47.496 11:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:07:47.496 11:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:47.496 11:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:47.496 11:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:47.496 11:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:47.496 11:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:47.496 11:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:47.496 11:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:47.496 11:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:47.757 11:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:47.757 11:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:47.757 11:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:47.757 11:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:47.757 11:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:47.757 11:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:47.757 11:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:47.757 11:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:47.757 11:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:47.757 11:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:47.757 11:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:47.757 11:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:47.757 11:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:47.757 11:31:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:47.757 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:47.757 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:47.757 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:47.757 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:47.757 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:47.757 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:47.757 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:47.757 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:47.757 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:47.757 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:47.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.757 --rc genhtml_branch_coverage=1 00:07:47.757 --rc genhtml_function_coverage=1 00:07:47.757 --rc genhtml_legend=1 00:07:47.757 --rc geninfo_all_blocks=1 00:07:47.757 --rc geninfo_unexecuted_blocks=1 00:07:47.757 00:07:47.757 ' 00:07:47.757 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:47.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.757 --rc genhtml_branch_coverage=1 00:07:47.757 --rc genhtml_function_coverage=1 00:07:47.757 --rc genhtml_legend=1 00:07:47.757 --rc geninfo_all_blocks=1 00:07:47.757 --rc geninfo_unexecuted_blocks=1 00:07:47.757 00:07:47.757 ' 00:07:47.757 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:47.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.757 --rc genhtml_branch_coverage=1 00:07:47.757 --rc genhtml_function_coverage=1 00:07:47.757 --rc genhtml_legend=1 00:07:47.757 --rc geninfo_all_blocks=1 00:07:47.757 --rc geninfo_unexecuted_blocks=1 00:07:47.757 00:07:47.757 ' 00:07:47.757 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:47.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.757 --rc genhtml_branch_coverage=1 00:07:47.757 --rc genhtml_function_coverage=1 00:07:47.757 --rc genhtml_legend=1 00:07:47.757 --rc geninfo_all_blocks=1 00:07:47.757 --rc geninfo_unexecuted_blocks=1 00:07:47.757 00:07:47.757 ' 00:07:47.757 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:47.757 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:47.757 11:31:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:47.757 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:47.757 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:47.757 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:47.757 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:47.757 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:47.757 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:47.757 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:47.757 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:47.757 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:47.757 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:47.757 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:47.757 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:47.757 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:47.757 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:47.758 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:47.758 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:47.758 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:47.758 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:47.758 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:47.758 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:47.758 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.758 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.758 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.758 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:47.758 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.758 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:47.758 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:47.758 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:47.758 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:47.758 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:47.758 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:47.758 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:47.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:47.758 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:47.758 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:47.758 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:47.758 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:47.758 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:47.758 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:47.758 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:47.758 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:47.758 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:47.758 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:47.758 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:47.758 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.758 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:47.758 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.758 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:47.758 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:47.758 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:47.758 11:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:55.898 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:55.898 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:55.898 11:31:20 
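Here common.sh builds its table of supported NICs (Intel E810/X722 plus a range of Mellanox device IDs) and matches it against the machine: two E810 functions, 0000:4b:00.0 and 0000:4b:00.1, device ID 0x159b bound to the ice driver. A self-contained sketch of the address-to-interface mapping it performs, reading sysfs directly rather than the pci_bus_cache table common.sh prepares elsewhere (so this is an illustrative equivalent, not the script's actual code path):

  net_devs=()
  for dev in /sys/bus/pci/devices/*; do
      # E810-style function: vendor 0x8086 (Intel), device 0x159b, as found above
      [[ $(<"$dev/vendor") == 0x8086 && $(<"$dev/device") == 0x159b ]] || continue
      [[ -d $dev/net ]] || continue                 # skip functions with no netdev
      pci_net_devs=("$dev/net/"*)                   # e.g. .../net/cvl_0_0
      net_devs+=("${pci_net_devs[@]##*/}")          # keep the interface name only
  done
  printf 'Found net device: %s\n' "${net_devs[@]}"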
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:55.898 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:55.898 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:55.899 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:55.899 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:55.899 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.531 ms 00:07:55.899 00:07:55.899 --- 10.0.0.2 ping statistics --- 00:07:55.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:55.899 rtt min/avg/max/mdev = 0.531/0.531/0.531/0.000 ms 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:55.899 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:55.899 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:07:55.899 00:07:55.899 --- 10.0.0.1 ping statistics --- 00:07:55.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:55.899 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=888350 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 888350 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 888350 ']' 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:55.899 11:31:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:55.899 [2024-11-15 11:31:20.519004] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
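At this point nvmf_tcp_init has turned the dual-port E810 into a self-contained TCP test bed: port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and both directions are verified by the pings above. Stripped of xtrace noise, the sequence reduces to:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port leaves the root ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns

Since the two ports are presumably cabled back-to-back, traffic between 10.0.0.1 and 10.0.0.2 traverses the real NIC rather than loopback, which is the point of the NET_TYPE=phy job.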
00:07:55.899 [2024-11-15 11:31:20.519066] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:55.899 [2024-11-15 11:31:20.619126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.899 [2024-11-15 11:31:20.669761] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:55.899 [2024-11-15 11:31:20.669810] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:55.899 [2024-11-15 11:31:20.669819] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:55.899 [2024-11-15 11:31:20.669826] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:55.899 [2024-11-15 11:31:20.669832] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:55.899 [2024-11-15 11:31:20.670630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.899 11:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:55.899 11:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:07:55.899 11:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:55.899 11:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:55.899 11:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:55.899 11:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:55.899 11:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:56.160 [2024-11-15 11:31:21.538298] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:56.161 11:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:56.161 11:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:56.161 11:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:56.161 11:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:56.161 ************************************ 00:07:56.161 START TEST lvs_grow_clean 00:07:56.161 ************************************ 00:07:56.161 11:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:07:56.161 11:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:56.161 11:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:56.161 11:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:56.161 11:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:56.161 11:31:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:56.161 11:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:56.161 11:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:56.161 11:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:56.161 11:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:56.422 11:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:56.422 11:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:56.682 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=18daee01-80dc-45f5-9cb6-67de6a24371d 00:07:56.682 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18daee01-80dc-45f5-9cb6-67de6a24371d 00:07:56.682 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:56.943 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:56.943 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:56.943 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 18daee01-80dc-45f5-9cb6-67de6a24371d lvol 150 00:07:57.203 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=0ade14a4-e545-43b0-9e10-45d6d169b11a 00:07:57.203 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:57.203 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:57.203 [2024-11-15 11:31:22.621305] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:57.203 [2024-11-15 11:31:22.621376] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:57.203 true 00:07:57.203 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
18daee01-80dc-45f5-9cb6-67de6a24371d 00:07:57.203 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:57.464 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:57.464 11:31:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:57.725 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0ade14a4-e545-43b0-9e10-45d6d169b11a 00:07:57.725 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:57.986 [2024-11-15 11:31:23.347653] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:57.986 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:58.248 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=888893 00:07:58.248 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:58.248 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:58.248 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 888893 /var/tmp/bdevperf.sock 00:07:58.248 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 888893 ']' 00:07:58.248 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:58.248 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:58.248 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:58.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:58.248 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:58.248 11:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:58.248 [2024-11-15 11:31:23.598850] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
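The lvs_grow_clean test now exercises online lvstore growth end to end. The RPC sequence traced above, condensed into one sketch (backing-file path abbreviated; $rpc stands for scripts/rpc.py):

  truncate -s 200M aio_file                        # 200 MiB backing file
  $rpc bdev_aio_create aio_file aio_bdev 4096
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)      # 49 data clusters
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)          # 150 MiB volume
  truncate -s 400M aio_file                        # grow the file under the lvstore
  $rpc bdev_aio_rescan aio_bdev                    # 51200 -> 102400 blocks, per the notice above
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

While bdevperf drives random writes against the exported lvol, the test issues bdev_lvol_grow_lvstore -u "$lvs" mid-run and then confirms total_data_clusters went from 49 to 99, as the jq checks in the run below show.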
00:07:58.248 [2024-11-15 11:31:23.598918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid888893 ] 00:07:58.248 [2024-11-15 11:31:23.692836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.509 [2024-11-15 11:31:23.746827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.080 11:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:59.080 11:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:07:59.080 11:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:59.342 Nvme0n1 00:07:59.342 11:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:59.603 [ 00:07:59.603 { 00:07:59.603 "name": "Nvme0n1", 00:07:59.603 "aliases": [ 00:07:59.603 "0ade14a4-e545-43b0-9e10-45d6d169b11a" 00:07:59.603 ], 00:07:59.603 "product_name": "NVMe disk", 00:07:59.603 "block_size": 4096, 00:07:59.603 "num_blocks": 38912, 00:07:59.603 "uuid": "0ade14a4-e545-43b0-9e10-45d6d169b11a", 00:07:59.603 "numa_id": 0, 00:07:59.603 "assigned_rate_limits": { 00:07:59.603 "rw_ios_per_sec": 0, 00:07:59.603 "rw_mbytes_per_sec": 0, 00:07:59.603 "r_mbytes_per_sec": 0, 00:07:59.603 "w_mbytes_per_sec": 0 00:07:59.603 }, 00:07:59.603 "claimed": false, 00:07:59.603 "zoned": false, 00:07:59.603 "supported_io_types": { 00:07:59.603 "read": true, 00:07:59.603 "write": true, 00:07:59.603 "unmap": true, 00:07:59.603 "flush": true, 00:07:59.603 "reset": true, 00:07:59.603 "nvme_admin": true, 00:07:59.603 "nvme_io": true, 00:07:59.603 "nvme_io_md": false, 00:07:59.603 "write_zeroes": true, 00:07:59.603 "zcopy": false, 00:07:59.603 "get_zone_info": false, 00:07:59.603 "zone_management": false, 00:07:59.603 "zone_append": false, 00:07:59.603 "compare": true, 00:07:59.603 "compare_and_write": true, 00:07:59.603 "abort": true, 00:07:59.603 "seek_hole": false, 00:07:59.603 "seek_data": false, 00:07:59.603 "copy": true, 00:07:59.603 "nvme_iov_md": false 00:07:59.603 }, 00:07:59.603 "memory_domains": [ 00:07:59.603 { 00:07:59.603 "dma_device_id": "system", 00:07:59.603 "dma_device_type": 1 00:07:59.603 } 00:07:59.603 ], 00:07:59.603 "driver_specific": { 00:07:59.603 "nvme": [ 00:07:59.603 { 00:07:59.603 "trid": { 00:07:59.603 "trtype": "TCP", 00:07:59.603 "adrfam": "IPv4", 00:07:59.603 "traddr": "10.0.0.2", 00:07:59.603 "trsvcid": "4420", 00:07:59.603 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:59.603 }, 00:07:59.603 "ctrlr_data": { 00:07:59.603 "cntlid": 1, 00:07:59.603 "vendor_id": "0x8086", 00:07:59.603 "model_number": "SPDK bdev Controller", 00:07:59.603 "serial_number": "SPDK0", 00:07:59.603 "firmware_revision": "25.01", 00:07:59.603 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:59.603 "oacs": { 00:07:59.603 "security": 0, 00:07:59.603 "format": 0, 00:07:59.603 "firmware": 0, 00:07:59.603 "ns_manage": 0 00:07:59.603 }, 00:07:59.603 "multi_ctrlr": true, 00:07:59.603 
"ana_reporting": false 00:07:59.603 }, 00:07:59.603 "vs": { 00:07:59.603 "nvme_version": "1.3" 00:07:59.603 }, 00:07:59.603 "ns_data": { 00:07:59.603 "id": 1, 00:07:59.603 "can_share": true 00:07:59.603 } 00:07:59.603 } 00:07:59.603 ], 00:07:59.603 "mp_policy": "active_passive" 00:07:59.603 } 00:07:59.603 } 00:07:59.603 ] 00:07:59.603 11:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=889101 00:07:59.603 11:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:59.603 11:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:59.603 Running I/O for 10 seconds... 00:08:00.544 Latency(us) 00:08:00.544 [2024-11-15T10:31:26.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:00.544 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.544 Nvme0n1 : 1.00 24730.00 96.60 0.00 0.00 0.00 0.00 0.00 00:08:00.544 [2024-11-15T10:31:26.042Z] =================================================================================================================== 00:08:00.544 [2024-11-15T10:31:26.042Z] Total : 24730.00 96.60 0.00 0.00 0.00 0.00 0.00 00:08:00.544 00:08:01.484 11:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 18daee01-80dc-45f5-9cb6-67de6a24371d 00:08:01.744 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.744 Nvme0n1 : 2.00 24941.00 97.43 0.00 0.00 0.00 0.00 0.00 00:08:01.744 [2024-11-15T10:31:27.242Z] =================================================================================================================== 00:08:01.744 [2024-11-15T10:31:27.242Z] Total : 24941.00 97.43 0.00 0.00 0.00 0.00 0.00 00:08:01.744 00:08:01.744 true 00:08:01.744 11:31:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18daee01-80dc-45f5-9cb6-67de6a24371d 00:08:01.744 11:31:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:02.005 11:31:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:02.005 11:31:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:02.005 11:31:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 889101 00:08:02.575 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.575 Nvme0n1 : 3.00 25038.33 97.81 0.00 0.00 0.00 0.00 0.00 00:08:02.575 [2024-11-15T10:31:28.073Z] =================================================================================================================== 00:08:02.575 [2024-11-15T10:31:28.073Z] Total : 25038.33 97.81 0.00 0.00 0.00 0.00 0.00 00:08:02.575 00:08:03.960 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.960 Nvme0n1 : 4.00 25098.75 98.04 0.00 0.00 0.00 0.00 0.00 00:08:03.960 [2024-11-15T10:31:29.458Z] 
=================================================================================================================== 00:08:03.960 [2024-11-15T10:31:29.458Z] Total : 25098.75 98.04 0.00 0.00 0.00 0.00 0.00 00:08:03.960 00:08:04.902 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.902 Nvme0n1 : 5.00 25134.80 98.18 0.00 0.00 0.00 0.00 0.00 00:08:04.902 [2024-11-15T10:31:30.400Z] =================================================================================================================== 00:08:04.902 [2024-11-15T10:31:30.400Z] Total : 25134.80 98.18 0.00 0.00 0.00 0.00 0.00 00:08:04.902 00:08:05.845 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:05.845 Nvme0n1 : 6.00 25169.50 98.32 0.00 0.00 0.00 0.00 0.00 00:08:05.845 [2024-11-15T10:31:31.343Z] =================================================================================================================== 00:08:05.845 [2024-11-15T10:31:31.343Z] Total : 25169.50 98.32 0.00 0.00 0.00 0.00 0.00 00:08:05.845 00:08:06.787 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:06.787 Nvme0n1 : 7.00 25194.57 98.42 0.00 0.00 0.00 0.00 0.00 00:08:06.787 [2024-11-15T10:31:32.285Z] =================================================================================================================== 00:08:06.787 [2024-11-15T10:31:32.285Z] Total : 25194.57 98.42 0.00 0.00 0.00 0.00 0.00 00:08:06.787 00:08:07.730 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.730 Nvme0n1 : 8.00 25213.25 98.49 0.00 0.00 0.00 0.00 0.00 00:08:07.730 [2024-11-15T10:31:33.228Z] =================================================================================================================== 00:08:07.730 [2024-11-15T10:31:33.228Z] Total : 25213.25 98.49 0.00 0.00 0.00 0.00 0.00 00:08:07.730 00:08:08.672 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:08.672 Nvme0n1 : 9.00 25227.33 98.54 0.00 0.00 0.00 0.00 0.00 00:08:08.672 [2024-11-15T10:31:34.170Z] =================================================================================================================== 00:08:08.672 [2024-11-15T10:31:34.170Z] Total : 25227.33 98.54 0.00 0.00 0.00 0.00 0.00 00:08:08.672 00:08:09.613 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:09.613 Nvme0n1 : 10.00 25245.30 98.61 0.00 0.00 0.00 0.00 0.00 00:08:09.613 [2024-11-15T10:31:35.111Z] =================================================================================================================== 00:08:09.613 [2024-11-15T10:31:35.111Z] Total : 25245.30 98.61 0.00 0.00 0.00 0.00 0.00 00:08:09.613 00:08:09.613 00:08:09.613 Latency(us) 00:08:09.613 [2024-11-15T10:31:35.111Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.613 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:09.613 Nvme0n1 : 10.01 25245.08 98.61 0.00 0.00 5066.45 2525.87 14090.24 00:08:09.613 [2024-11-15T10:31:35.111Z] =================================================================================================================== 00:08:09.613 [2024-11-15T10:31:35.111Z] Total : 25245.08 98.61 0.00 0.00 5066.45 2525.87 14090.24 00:08:09.613 { 00:08:09.613 "results": [ 00:08:09.613 { 00:08:09.613 "job": "Nvme0n1", 00:08:09.613 "core_mask": "0x2", 00:08:09.613 "workload": "randwrite", 00:08:09.613 "status": "finished", 00:08:09.613 "queue_depth": 128, 00:08:09.613 "io_size": 4096, 00:08:09.613 
"runtime": 10.005157, 00:08:09.613 "iops": 25245.081111670712, 00:08:09.613 "mibps": 98.61359809246372, 00:08:09.613 "io_failed": 0, 00:08:09.613 "io_timeout": 0, 00:08:09.613 "avg_latency_us": 5066.453636549595, 00:08:09.613 "min_latency_us": 2525.866666666667, 00:08:09.613 "max_latency_us": 14090.24 00:08:09.613 } 00:08:09.613 ], 00:08:09.613 "core_count": 1 00:08:09.613 } 00:08:09.613 11:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 888893 00:08:09.613 11:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 888893 ']' 00:08:09.613 11:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 888893 00:08:09.613 11:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:08:09.613 11:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:09.613 11:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 888893 00:08:09.875 11:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:09.875 11:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:09.875 11:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 888893' 00:08:09.875 killing process with pid 888893 00:08:09.875 11:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 888893 00:08:09.875 Received shutdown signal, test time was about 10.000000 seconds 00:08:09.875 00:08:09.875 Latency(us) 00:08:09.875 [2024-11-15T10:31:35.373Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:09.875 [2024-11-15T10:31:35.373Z] =================================================================================================================== 00:08:09.875 [2024-11-15T10:31:35.373Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:09.875 11:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 888893 00:08:09.875 11:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:10.136 11:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:10.136 11:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18daee01-80dc-45f5-9cb6-67de6a24371d 00:08:10.136 11:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:10.397 11:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:10.397 11:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:10.397 11:31:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:10.658 [2024-11-15 11:31:35.925086] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:10.658 11:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18daee01-80dc-45f5-9cb6-67de6a24371d 00:08:10.658 11:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:10.658 11:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18daee01-80dc-45f5-9cb6-67de6a24371d 00:08:10.658 11:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:10.658 11:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.658 11:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:10.658 11:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.658 11:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:10.658 11:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:10.658 11:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:10.658 11:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:10.658 11:31:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18daee01-80dc-45f5-9cb6-67de6a24371d 00:08:10.658 request: 00:08:10.658 { 00:08:10.658 "uuid": "18daee01-80dc-45f5-9cb6-67de6a24371d", 00:08:10.658 "method": "bdev_lvol_get_lvstores", 00:08:10.658 "req_id": 1 00:08:10.658 } 00:08:10.658 Got JSON-RPC error response 00:08:10.658 response: 00:08:10.658 { 00:08:10.658 "code": -19, 00:08:10.658 "message": "No such device" 00:08:10.658 } 00:08:10.658 11:31:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:10.658 11:31:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:10.658 11:31:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:10.658 11:31:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:10.658 11:31:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:10.919 aio_bdev 00:08:10.919 11:31:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0ade14a4-e545-43b0-9e10-45d6d169b11a 00:08:10.919 11:31:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=0ade14a4-e545-43b0-9e10-45d6d169b11a 00:08:10.919 11:31:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:10.919 11:31:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:08:10.919 11:31:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:10.919 11:31:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:10.919 11:31:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:11.179 11:31:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 0ade14a4-e545-43b0-9e10-45d6d169b11a -t 2000 00:08:11.179 [ 00:08:11.179 { 00:08:11.179 "name": "0ade14a4-e545-43b0-9e10-45d6d169b11a", 00:08:11.179 "aliases": [ 00:08:11.179 "lvs/lvol" 00:08:11.179 ], 00:08:11.179 "product_name": "Logical Volume", 00:08:11.179 "block_size": 4096, 00:08:11.179 "num_blocks": 38912, 00:08:11.179 "uuid": "0ade14a4-e545-43b0-9e10-45d6d169b11a", 00:08:11.179 "assigned_rate_limits": { 00:08:11.179 "rw_ios_per_sec": 0, 00:08:11.179 "rw_mbytes_per_sec": 0, 00:08:11.179 "r_mbytes_per_sec": 0, 00:08:11.179 "w_mbytes_per_sec": 0 00:08:11.179 }, 00:08:11.179 "claimed": false, 00:08:11.179 "zoned": false, 00:08:11.179 "supported_io_types": { 00:08:11.179 "read": true, 00:08:11.179 "write": true, 00:08:11.179 "unmap": true, 00:08:11.179 "flush": false, 00:08:11.179 "reset": true, 00:08:11.179 "nvme_admin": false, 00:08:11.179 "nvme_io": false, 00:08:11.179 "nvme_io_md": false, 00:08:11.179 "write_zeroes": true, 00:08:11.179 "zcopy": false, 00:08:11.179 "get_zone_info": false, 00:08:11.179 "zone_management": false, 00:08:11.179 "zone_append": false, 00:08:11.179 "compare": false, 00:08:11.179 "compare_and_write": false, 00:08:11.179 "abort": false, 00:08:11.179 "seek_hole": true, 00:08:11.179 "seek_data": true, 00:08:11.179 "copy": false, 00:08:11.179 "nvme_iov_md": false 00:08:11.179 }, 00:08:11.179 "driver_specific": { 00:08:11.179 "lvol": { 00:08:11.179 "lvol_store_uuid": "18daee01-80dc-45f5-9cb6-67de6a24371d", 00:08:11.179 "base_bdev": "aio_bdev", 00:08:11.179 "thin_provision": false, 00:08:11.179 "num_allocated_clusters": 38, 00:08:11.179 "snapshot": false, 00:08:11.179 "clone": false, 00:08:11.179 "esnap_clone": false 00:08:11.179 } 00:08:11.179 } 00:08:11.179 } 00:08:11.179 ] 00:08:11.179 11:31:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:08:11.179 11:31:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18daee01-80dc-45f5-9cb6-67de6a24371d 00:08:11.179 
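The bdev dump above also lets the cluster accounting be checked by hand: the grown 400 MiB backing file yields 99 usable 4 MiB clusters (the remainder is consumed by lvstore metadata), and the 150 MiB lvol occupies 38 of them (num_allocated_clusters above; 38 x 4 MiB = 152 MiB, i.e. the 38912 x 4 KiB blocks reported), leaving 99 - 38 = 61 free. The (( free_clusters == 61 )) and (( data_clusters == 99 )) assertions that follow verify exactly this, and the same numbers can be read back directly:

  $rpc bdev_lvol_get_lvstores -u "$lvs" \
      | jq -r '.[0] | "total=\(.total_data_clusters) free=\(.free_clusters)"'   # total=99 free=61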
11:31:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:11.440 11:31:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:11.440 11:31:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 18daee01-80dc-45f5-9cb6-67de6a24371d 00:08:11.440 11:31:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:11.701 11:31:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:11.701 11:31:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0ade14a4-e545-43b0-9e10-45d6d169b11a 00:08:11.702 11:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 18daee01-80dc-45f5-9cb6-67de6a24371d 00:08:11.962 11:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:12.223 11:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:12.223 00:08:12.223 real 0m15.905s 00:08:12.223 user 0m15.585s 00:08:12.223 sys 0m1.454s 00:08:12.223 11:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:12.223 11:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:12.223 ************************************ 00:08:12.223 END TEST lvs_grow_clean 00:08:12.223 ************************************ 00:08:12.223 11:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:12.223 11:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:12.223 11:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:12.223 11:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:12.223 ************************************ 00:08:12.223 START TEST lvs_grow_dirty 00:08:12.223 ************************************ 00:08:12.223 11:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:08:12.223 11:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:12.223 11:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:12.223 11:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:12.223 11:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:12.223 11:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:12.223 11:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:12.223 11:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:12.223 11:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:12.223 11:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:12.484 11:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:12.484 11:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:12.745 11:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=c43491be-a214-4489-929d-7076716dd107 00:08:12.745 11:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c43491be-a214-4489-929d-7076716dd107 00:08:12.745 11:31:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:12.745 11:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:12.745 11:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:12.745 11:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c43491be-a214-4489-929d-7076716dd107 lvol 150 00:08:13.005 11:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e98a5838-c640-4758-8bae-960b4c26f3f1 00:08:13.006 11:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:13.006 11:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:13.006 [2024-11-15 11:31:38.485167] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:13.006 [2024-11-15 11:31:38.485206] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:13.006 true 00:08:13.006 11:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:13.267 11:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # 
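The cluster counts asserted throughout this test fall straight out of the sizes chosen here: with 4 MiB clusters (--cluster-sz 4194304), a 200 MiB backing file gives 50 clusters, of which roughly one goes to blobstore metadata, leaving the 49 data clusters checked above; doubling the file to 400 MiB yields 99, and a 150 MiB lvol pins ceil(150/4) = 38 of them, so 99 - 38 = 61 stay free. A back-of-envelope check in shell arithmetic (sizes in MiB):

    echo $(( 200 / 4 - 1 ))                  # 49 data clusters before the grow
    echo $(( 400 / 4 - 1 ))                  # 99 data clusters after the grow
    echo $(( (150 + 3) / 4 ))                # 38 clusters consumed by the 150 MiB lvol
    echo $(( 400 / 4 - 1 - (150 + 3) / 4 ))  # 61 clusters left free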
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c43491be-a214-4489-929d-7076716dd107 00:08:13.267 11:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:13.267 11:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:13.528 11:31:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e98a5838-c640-4758-8bae-960b4c26f3f1 00:08:13.528 11:31:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:13.789 [2024-11-15 11:31:39.143066] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:13.789 11:31:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:14.050 11:31:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=892177 00:08:14.050 11:31:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:14.050 11:31:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:14.050 11:31:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 892177 /var/tmp/bdevperf.sock 00:08:14.050 11:31:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 892177 ']' 00:08:14.050 11:31:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:14.050 11:31:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:14.050 11:31:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:14.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:14.050 11:31:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:14.050 11:31:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:14.050 [2024-11-15 11:31:39.359510] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
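Publishing the lvol over NVMe/TCP takes the three subsystem RPCs shown above, after which any initiator can connect to 10.0.0.2:4420. A minimal sketch with the names used in this run, assuming $lvol holds the lvol UUID:

    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

bdevperf then runs as a separate process with its own RPC socket (-r /var/tmp/bdevperf.sock), which is why the attach and perform_tests calls that follow all pass -s to rpc.py.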
00:08:14.050 [2024-11-15 11:31:39.359558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid892177 ] 00:08:14.050 [2024-11-15 11:31:39.440509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.050 [2024-11-15 11:31:39.470441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.992 11:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:14.992 11:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:08:14.992 11:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:15.253 Nvme0n1 00:08:15.253 11:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:15.253 [ 00:08:15.253 { 00:08:15.253 "name": "Nvme0n1", 00:08:15.253 "aliases": [ 00:08:15.253 "e98a5838-c640-4758-8bae-960b4c26f3f1" 00:08:15.253 ], 00:08:15.253 "product_name": "NVMe disk", 00:08:15.253 "block_size": 4096, 00:08:15.253 "num_blocks": 38912, 00:08:15.254 "uuid": "e98a5838-c640-4758-8bae-960b4c26f3f1", 00:08:15.254 "numa_id": 0, 00:08:15.254 "assigned_rate_limits": { 00:08:15.254 "rw_ios_per_sec": 0, 00:08:15.254 "rw_mbytes_per_sec": 0, 00:08:15.254 "r_mbytes_per_sec": 0, 00:08:15.254 "w_mbytes_per_sec": 0 00:08:15.254 }, 00:08:15.254 "claimed": false, 00:08:15.254 "zoned": false, 00:08:15.254 "supported_io_types": { 00:08:15.254 "read": true, 00:08:15.254 "write": true, 00:08:15.254 "unmap": true, 00:08:15.254 "flush": true, 00:08:15.254 "reset": true, 00:08:15.254 "nvme_admin": true, 00:08:15.254 "nvme_io": true, 00:08:15.254 "nvme_io_md": false, 00:08:15.254 "write_zeroes": true, 00:08:15.254 "zcopy": false, 00:08:15.254 "get_zone_info": false, 00:08:15.254 "zone_management": false, 00:08:15.254 "zone_append": false, 00:08:15.254 "compare": true, 00:08:15.254 "compare_and_write": true, 00:08:15.254 "abort": true, 00:08:15.254 "seek_hole": false, 00:08:15.254 "seek_data": false, 00:08:15.254 "copy": true, 00:08:15.254 "nvme_iov_md": false 00:08:15.254 }, 00:08:15.254 "memory_domains": [ 00:08:15.254 { 00:08:15.254 "dma_device_id": "system", 00:08:15.254 "dma_device_type": 1 00:08:15.254 } 00:08:15.254 ], 00:08:15.254 "driver_specific": { 00:08:15.254 "nvme": [ 00:08:15.254 { 00:08:15.254 "trid": { 00:08:15.254 "trtype": "TCP", 00:08:15.254 "adrfam": "IPv4", 00:08:15.254 "traddr": "10.0.0.2", 00:08:15.254 "trsvcid": "4420", 00:08:15.254 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:15.254 }, 00:08:15.254 "ctrlr_data": { 00:08:15.254 "cntlid": 1, 00:08:15.254 "vendor_id": "0x8086", 00:08:15.254 "model_number": "SPDK bdev Controller", 00:08:15.254 "serial_number": "SPDK0", 00:08:15.254 "firmware_revision": "25.01", 00:08:15.254 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:15.254 "oacs": { 00:08:15.254 "security": 0, 00:08:15.254 "format": 0, 00:08:15.254 "firmware": 0, 00:08:15.254 "ns_manage": 0 00:08:15.254 }, 00:08:15.254 "multi_ctrlr": true, 00:08:15.254 
"ana_reporting": false 00:08:15.254 }, 00:08:15.254 "vs": { 00:08:15.254 "nvme_version": "1.3" 00:08:15.254 }, 00:08:15.254 "ns_data": { 00:08:15.254 "id": 1, 00:08:15.254 "can_share": true 00:08:15.254 } 00:08:15.254 } 00:08:15.254 ], 00:08:15.254 "mp_policy": "active_passive" 00:08:15.254 } 00:08:15.254 } 00:08:15.254 ] 00:08:15.254 11:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=892386 00:08:15.254 11:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:15.254 11:31:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:15.515 Running I/O for 10 seconds... 00:08:16.457 Latency(us) 00:08:16.457 [2024-11-15T10:31:41.955Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:16.457 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.457 Nvme0n1 : 1.00 24769.00 96.75 0.00 0.00 0.00 0.00 0.00 00:08:16.457 [2024-11-15T10:31:41.955Z] =================================================================================================================== 00:08:16.457 [2024-11-15T10:31:41.955Z] Total : 24769.00 96.75 0.00 0.00 0.00 0.00 0.00 00:08:16.457 00:08:17.400 11:31:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c43491be-a214-4489-929d-7076716dd107 00:08:17.400 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.400 Nvme0n1 : 2.00 24927.50 97.37 0.00 0.00 0.00 0.00 0.00 00:08:17.400 [2024-11-15T10:31:42.898Z] =================================================================================================================== 00:08:17.400 [2024-11-15T10:31:42.898Z] Total : 24927.50 97.37 0.00 0.00 0.00 0.00 0.00 00:08:17.400 00:08:17.400 true 00:08:17.661 11:31:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c43491be-a214-4489-929d-7076716dd107 00:08:17.661 11:31:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:17.661 11:31:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:17.661 11:31:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:17.661 11:31:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 892386 00:08:18.607 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.607 Nvme0n1 : 3.00 24998.00 97.65 0.00 0.00 0.00 0.00 0.00 00:08:18.607 [2024-11-15T10:31:44.105Z] =================================================================================================================== 00:08:18.607 [2024-11-15T10:31:44.105Z] Total : 24998.00 97.65 0.00 0.00 0.00 0.00 0.00 00:08:18.607 00:08:19.550 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.550 Nvme0n1 : 4.00 25067.50 97.92 0.00 0.00 0.00 0.00 0.00 00:08:19.550 [2024-11-15T10:31:45.048Z] 
=================================================================================================================== 00:08:19.550 [2024-11-15T10:31:45.048Z] Total : 25067.50 97.92 0.00 0.00 0.00 0.00 0.00 00:08:19.550 00:08:20.493 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.493 Nvme0n1 : 5.00 25108.60 98.08 0.00 0.00 0.00 0.00 0.00 00:08:20.493 [2024-11-15T10:31:45.991Z] =================================================================================================================== 00:08:20.493 [2024-11-15T10:31:45.991Z] Total : 25108.60 98.08 0.00 0.00 0.00 0.00 0.00 00:08:20.493 00:08:21.439 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.439 Nvme0n1 : 6.00 25136.83 98.19 0.00 0.00 0.00 0.00 0.00 00:08:21.439 [2024-11-15T10:31:46.937Z] =================================================================================================================== 00:08:21.439 [2024-11-15T10:31:46.937Z] Total : 25136.83 98.19 0.00 0.00 0.00 0.00 0.00 00:08:21.439 00:08:22.612 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.612 Nvme0n1 : 7.00 25157.14 98.27 0.00 0.00 0.00 0.00 0.00 00:08:22.612 [2024-11-15T10:31:48.110Z] =================================================================================================================== 00:08:22.612 [2024-11-15T10:31:48.110Z] Total : 25157.14 98.27 0.00 0.00 0.00 0.00 0.00 00:08:22.612 00:08:23.553 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.553 Nvme0n1 : 8.00 25180.00 98.36 0.00 0.00 0.00 0.00 0.00 00:08:23.553 [2024-11-15T10:31:49.051Z] =================================================================================================================== 00:08:23.553 [2024-11-15T10:31:49.051Z] Total : 25180.00 98.36 0.00 0.00 0.00 0.00 0.00 00:08:23.553 00:08:24.497 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.497 Nvme0n1 : 9.00 25190.78 98.40 0.00 0.00 0.00 0.00 0.00 00:08:24.497 [2024-11-15T10:31:49.995Z] =================================================================================================================== 00:08:24.497 [2024-11-15T10:31:49.995Z] Total : 25190.78 98.40 0.00 0.00 0.00 0.00 0.00 00:08:24.497 00:08:25.438 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.438 Nvme0n1 : 10.00 25204.70 98.46 0.00 0.00 0.00 0.00 0.00 00:08:25.438 [2024-11-15T10:31:50.936Z] =================================================================================================================== 00:08:25.438 [2024-11-15T10:31:50.937Z] Total : 25204.70 98.46 0.00 0.00 0.00 0.00 0.00 00:08:25.439 00:08:25.439 00:08:25.439 Latency(us) 00:08:25.439 [2024-11-15T10:31:50.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:25.439 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.439 Nvme0n1 : 10.01 25203.62 98.45 0.00 0.00 5075.21 3017.39 12288.00 00:08:25.439 [2024-11-15T10:31:50.937Z] =================================================================================================================== 00:08:25.439 [2024-11-15T10:31:50.937Z] Total : 25203.62 98.45 0.00 0.00 5075.21 3017.39 12288.00 00:08:25.439 { 00:08:25.439 "results": [ 00:08:25.439 { 00:08:25.439 "job": "Nvme0n1", 00:08:25.439 "core_mask": "0x2", 00:08:25.439 "workload": "randwrite", 00:08:25.439 "status": "finished", 00:08:25.439 "queue_depth": 128, 00:08:25.439 "io_size": 4096, 00:08:25.439 
"runtime": 10.005506, 00:08:25.439 "iops": 25203.622885239387, 00:08:25.439 "mibps": 98.45165189546636, 00:08:25.439 "io_failed": 0, 00:08:25.439 "io_timeout": 0, 00:08:25.439 "avg_latency_us": 5075.212572856151, 00:08:25.439 "min_latency_us": 3017.3866666666668, 00:08:25.439 "max_latency_us": 12288.0 00:08:25.439 } 00:08:25.439 ], 00:08:25.439 "core_count": 1 00:08:25.439 } 00:08:25.439 11:31:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 892177 00:08:25.439 11:31:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 892177 ']' 00:08:25.439 11:31:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 892177 00:08:25.439 11:31:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:08:25.439 11:31:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:25.439 11:31:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 892177 00:08:25.439 11:31:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:25.439 11:31:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:25.439 11:31:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 892177' 00:08:25.439 killing process with pid 892177 00:08:25.439 11:31:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 892177 00:08:25.439 Received shutdown signal, test time was about 10.000000 seconds 00:08:25.439 00:08:25.439 Latency(us) 00:08:25.439 [2024-11-15T10:31:50.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:25.439 [2024-11-15T10:31:50.937Z] =================================================================================================================== 00:08:25.439 [2024-11-15T10:31:50.937Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:25.439 11:31:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 892177 00:08:25.699 11:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:25.699 11:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:25.959 11:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c43491be-a214-4489-929d-7076716dd107 00:08:25.959 11:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:26.219 11:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:26.219 11:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:26.219 11:31:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 888350 00:08:26.219 11:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 888350 00:08:26.219 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 888350 Killed "${NVMF_APP[@]}" "$@" 00:08:26.219 11:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:26.219 11:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:26.219 11:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:26.219 11:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:26.219 11:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:26.219 11:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=894547 00:08:26.219 11:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 894547 00:08:26.219 11:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:26.219 11:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 894547 ']' 00:08:26.219 11:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.219 11:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:26.219 11:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.219 11:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:26.219 11:31:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:26.219 [2024-11-15 11:31:51.615992] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:08:26.219 [2024-11-15 11:31:51.616045] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:26.219 [2024-11-15 11:31:51.708565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.479 [2024-11-15 11:31:51.737573] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:26.479 [2024-11-15 11:31:51.737599] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:26.479 [2024-11-15 11:31:51.737604] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:26.479 [2024-11-15 11:31:51.737609] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
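The kill -9 above is the point of the dirty variant: the original target (pid 888350) dies without deleting the lvol or lvstore, so the blobstore is never marked cleanly unloaded. When the freshly started target (pid 894547) reloads the same backing file, blobstore detects the dirty state and replays its metadata, which is what the bs_recover and "Recover: blob" notices that follow signal. A sketch of the sequence, where $old_pid and $rootdir are hypothetical stand-ins for the values the script tracks:

    kill -9 "$old_pid"                        # no clean bdev/lvstore shutdown
    "$rootdir/build/bin/nvmf_tgt" -m 0x1 &    # fresh target process
    ./scripts/rpc.py bdev_aio_create "$rootdir/test/nvmf/target/aio_bdev" aio_bdev 4096
    # expect 'Performing recovery on blobstore' before the lvol reappears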
00:08:26.479 [2024-11-15 11:31:51.737613] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:26.479 [2024-11-15 11:31:51.738088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.051 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:27.051 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:08:27.051 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:27.051 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:27.051 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:27.051 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:27.051 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:27.312 [2024-11-15 11:31:52.596418] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:27.312 [2024-11-15 11:31:52.596491] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:27.312 [2024-11-15 11:31:52.596513] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:27.312 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:27.312 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e98a5838-c640-4758-8bae-960b4c26f3f1 00:08:27.312 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=e98a5838-c640-4758-8bae-960b4c26f3f1 00:08:27.312 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:27.312 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:08:27.312 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:27.312 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:27.312 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:27.312 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e98a5838-c640-4758-8bae-960b4c26f3f1 -t 2000 00:08:27.572 [ 00:08:27.572 { 00:08:27.572 "name": "e98a5838-c640-4758-8bae-960b4c26f3f1", 00:08:27.572 "aliases": [ 00:08:27.572 "lvs/lvol" 00:08:27.572 ], 00:08:27.572 "product_name": "Logical Volume", 00:08:27.572 "block_size": 4096, 00:08:27.572 "num_blocks": 38912, 00:08:27.572 "uuid": "e98a5838-c640-4758-8bae-960b4c26f3f1", 00:08:27.572 "assigned_rate_limits": { 00:08:27.572 "rw_ios_per_sec": 0, 00:08:27.572 "rw_mbytes_per_sec": 0, 
00:08:27.573 "r_mbytes_per_sec": 0, 00:08:27.573 "w_mbytes_per_sec": 0 00:08:27.573 }, 00:08:27.573 "claimed": false, 00:08:27.573 "zoned": false, 00:08:27.573 "supported_io_types": { 00:08:27.573 "read": true, 00:08:27.573 "write": true, 00:08:27.573 "unmap": true, 00:08:27.573 "flush": false, 00:08:27.573 "reset": true, 00:08:27.573 "nvme_admin": false, 00:08:27.573 "nvme_io": false, 00:08:27.573 "nvme_io_md": false, 00:08:27.573 "write_zeroes": true, 00:08:27.573 "zcopy": false, 00:08:27.573 "get_zone_info": false, 00:08:27.573 "zone_management": false, 00:08:27.573 "zone_append": false, 00:08:27.573 "compare": false, 00:08:27.573 "compare_and_write": false, 00:08:27.573 "abort": false, 00:08:27.573 "seek_hole": true, 00:08:27.573 "seek_data": true, 00:08:27.573 "copy": false, 00:08:27.573 "nvme_iov_md": false 00:08:27.573 }, 00:08:27.573 "driver_specific": { 00:08:27.573 "lvol": { 00:08:27.573 "lvol_store_uuid": "c43491be-a214-4489-929d-7076716dd107", 00:08:27.573 "base_bdev": "aio_bdev", 00:08:27.573 "thin_provision": false, 00:08:27.573 "num_allocated_clusters": 38, 00:08:27.573 "snapshot": false, 00:08:27.573 "clone": false, 00:08:27.573 "esnap_clone": false 00:08:27.573 } 00:08:27.573 } 00:08:27.573 } 00:08:27.573 ] 00:08:27.573 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:08:27.573 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c43491be-a214-4489-929d-7076716dd107 00:08:27.573 11:31:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:27.833 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:27.833 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c43491be-a214-4489-929d-7076716dd107 00:08:27.833 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:27.833 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:27.833 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:28.093 [2024-11-15 11:31:53.424968] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:28.093 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c43491be-a214-4489-929d-7076716dd107 00:08:28.093 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:28.093 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c43491be-a214-4489-929d-7076716dd107 00:08:28.093 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:28.093 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.093 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:28.093 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.093 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:28.093 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.093 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:28.093 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:28.093 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c43491be-a214-4489-929d-7076716dd107 00:08:28.353 request: 00:08:28.353 { 00:08:28.353 "uuid": "c43491be-a214-4489-929d-7076716dd107", 00:08:28.353 "method": "bdev_lvol_get_lvstores", 00:08:28.353 "req_id": 1 00:08:28.353 } 00:08:28.353 Got JSON-RPC error response 00:08:28.353 response: 00:08:28.353 { 00:08:28.353 "code": -19, 00:08:28.353 "message": "No such device" 00:08:28.353 } 00:08:28.353 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:28.353 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:28.353 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:28.353 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:28.353 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:28.353 aio_bdev 00:08:28.354 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e98a5838-c640-4758-8bae-960b4c26f3f1 00:08:28.354 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=e98a5838-c640-4758-8bae-960b4c26f3f1 00:08:28.354 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:28.354 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:08:28.354 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:28.354 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:28.354 11:31:53 
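The NOT wrapper driving the machinery above is the autotest helper that inverts an exit status, so the step passes only when the RPC fails: with aio_bdev hot-removed, bdev_lvol_get_lvstores must come back with -19 "No such device". The same negative check without the helper, as a minimal sketch:

    ./scripts/rpc.py bdev_aio_delete aio_bdev
    if ./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" 2>/dev/null; then
        echo "lvstore still visible after base bdev removal" >&2
        exit 1
    fi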
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:28.614 11:31:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e98a5838-c640-4758-8bae-960b4c26f3f1 -t 2000 00:08:28.875 [ 00:08:28.875 { 00:08:28.875 "name": "e98a5838-c640-4758-8bae-960b4c26f3f1", 00:08:28.875 "aliases": [ 00:08:28.875 "lvs/lvol" 00:08:28.875 ], 00:08:28.875 "product_name": "Logical Volume", 00:08:28.875 "block_size": 4096, 00:08:28.875 "num_blocks": 38912, 00:08:28.875 "uuid": "e98a5838-c640-4758-8bae-960b4c26f3f1", 00:08:28.875 "assigned_rate_limits": { 00:08:28.875 "rw_ios_per_sec": 0, 00:08:28.875 "rw_mbytes_per_sec": 0, 00:08:28.875 "r_mbytes_per_sec": 0, 00:08:28.875 "w_mbytes_per_sec": 0 00:08:28.875 }, 00:08:28.875 "claimed": false, 00:08:28.875 "zoned": false, 00:08:28.875 "supported_io_types": { 00:08:28.875 "read": true, 00:08:28.875 "write": true, 00:08:28.875 "unmap": true, 00:08:28.875 "flush": false, 00:08:28.875 "reset": true, 00:08:28.875 "nvme_admin": false, 00:08:28.875 "nvme_io": false, 00:08:28.875 "nvme_io_md": false, 00:08:28.875 "write_zeroes": true, 00:08:28.875 "zcopy": false, 00:08:28.875 "get_zone_info": false, 00:08:28.875 "zone_management": false, 00:08:28.876 "zone_append": false, 00:08:28.876 "compare": false, 00:08:28.876 "compare_and_write": false, 00:08:28.876 "abort": false, 00:08:28.876 "seek_hole": true, 00:08:28.876 "seek_data": true, 00:08:28.876 "copy": false, 00:08:28.876 "nvme_iov_md": false 00:08:28.876 }, 00:08:28.876 "driver_specific": { 00:08:28.876 "lvol": { 00:08:28.876 "lvol_store_uuid": "c43491be-a214-4489-929d-7076716dd107", 00:08:28.876 "base_bdev": "aio_bdev", 00:08:28.876 "thin_provision": false, 00:08:28.876 "num_allocated_clusters": 38, 00:08:28.876 "snapshot": false, 00:08:28.876 "clone": false, 00:08:28.876 "esnap_clone": false 00:08:28.876 } 00:08:28.876 } 00:08:28.876 } 00:08:28.876 ] 00:08:28.876 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:08:28.876 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c43491be-a214-4489-929d-7076716dd107 00:08:28.876 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:28.876 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:28.876 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c43491be-a214-4489-929d-7076716dd107 00:08:28.876 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:29.137 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:29.137 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e98a5838-c640-4758-8bae-960b4c26f3f1 00:08:29.400 11:31:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c43491be-a214-4489-929d-7076716dd107 00:08:29.400 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:29.660 11:31:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:29.660 00:08:29.660 real 0m17.408s 00:08:29.660 user 0m45.810s 00:08:29.660 sys 0m2.970s 00:08:29.660 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:29.660 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:29.660 ************************************ 00:08:29.660 END TEST lvs_grow_dirty 00:08:29.660 ************************************ 00:08:29.660 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:29.660 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:08:29.660 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:08:29.660 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:08:29.660 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:29.660 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:08:29.660 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:08:29.660 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:08:29.660 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:29.660 nvmf_trace.0 00:08:29.660 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:08:29.660 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:29.661 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:29.661 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:29.661 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:29.661 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:29.661 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:29.661 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:29.661 rmmod nvme_tcp 00:08:29.661 rmmod nvme_fabrics 00:08:29.661 rmmod nvme_keyring 00:08:29.921 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:29.921 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:29.921 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:29.921 
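Before the cleanup path unloads the nvme kernel modules, process_shm snapshots the tracepoint buffer the target kept in shared memory, so the run can still be analyzed after the process is gone. The core of it, assuming $output_dir is the jenkins output directory; the shm file is named after the app's shm id (0 here):

    # Archive the nvmf trace buffer for offline analysis.
    tar -C /dev/shm/ -cvzf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0
    # Live snapshots use the command the target suggested at startup:
    #   spdk_trace -s nvmf -i 0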
11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 894547 ']' 00:08:29.921 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 894547 00:08:29.921 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 894547 ']' 00:08:29.921 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 894547 00:08:29.921 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:08:29.922 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:29.922 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 894547 00:08:29.922 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:29.922 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:29.922 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 894547' 00:08:29.922 killing process with pid 894547 00:08:29.922 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 894547 00:08:29.922 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 894547 00:08:29.922 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:29.922 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:29.922 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:29.922 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:29.922 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:29.922 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:29.922 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:29.922 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:29.922 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:29.922 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.922 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.922 11:31:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:32.469 00:08:32.469 real 0m44.626s 00:08:32.469 user 1m7.734s 00:08:32.469 sys 0m10.473s 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:32.469 ************************************ 00:08:32.469 END TEST nvmf_lvs_grow 00:08:32.469 ************************************ 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:32.469 ************************************ 00:08:32.469 START TEST nvmf_bdev_io_wait 00:08:32.469 ************************************ 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:32.469 * Looking for test storage... 00:08:32.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:32.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.469 --rc genhtml_branch_coverage=1 00:08:32.469 --rc genhtml_function_coverage=1 00:08:32.469 --rc genhtml_legend=1 00:08:32.469 --rc geninfo_all_blocks=1 00:08:32.469 --rc geninfo_unexecuted_blocks=1 00:08:32.469 00:08:32.469 ' 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:32.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.469 --rc genhtml_branch_coverage=1 00:08:32.469 --rc genhtml_function_coverage=1 00:08:32.469 --rc genhtml_legend=1 00:08:32.469 --rc geninfo_all_blocks=1 00:08:32.469 --rc geninfo_unexecuted_blocks=1 00:08:32.469 00:08:32.469 ' 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:32.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.469 --rc genhtml_branch_coverage=1 00:08:32.469 --rc genhtml_function_coverage=1 00:08:32.469 --rc genhtml_legend=1 00:08:32.469 --rc geninfo_all_blocks=1 00:08:32.469 --rc geninfo_unexecuted_blocks=1 00:08:32.469 00:08:32.469 ' 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:32.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.469 --rc genhtml_branch_coverage=1 00:08:32.469 --rc genhtml_function_coverage=1 00:08:32.469 --rc genhtml_legend=1 00:08:32.469 --rc geninfo_all_blocks=1 00:08:32.469 --rc geninfo_unexecuted_blocks=1 00:08:32.469 00:08:32.469 ' 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:32.469 11:31:57 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:32.469 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:32.470 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:32.470 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:32.470 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:32.470 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:32.470 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:32.470 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:32.470 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:32.470 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:32.470 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:32.470 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:32.470 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:32.470 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.470 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.470 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.470 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:32.470 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.470 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:32.470 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:32.470 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:32.470 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:32.470 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:32.470 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:32.470 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:32.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:32.470 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:32.470 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:32.470 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:32.470 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:32.470 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:08:32.470 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:32.470 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:32.470 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:32.470 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:32.470 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:32.470 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:32.470 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.470 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:32.470 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.470 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:32.470 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:32.470 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:32.470 11:31:57 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.608 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:40.608 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:40.608 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:40.608 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:40.608 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:40.608 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:40.608 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:40.608 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:40.608 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:40.608 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:40.608 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:40.608 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:40.608 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:40.608 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:40.608 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:40.608 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:40.608 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:40.608 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:40.608 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:40.608 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:40.608 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:40.608 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:40.609 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:40.609 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:40.609 11:32:04 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:40.609 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:40.609 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:40.609 11:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:40.609 11:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:40.609 11:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:40.609 11:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:40.609 11:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:40.609 11:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:40.609 11:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:40.609 11:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:40.609 11:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:40.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:40.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.494 ms 00:08:40.609 00:08:40.609 --- 10.0.0.2 ping statistics --- 00:08:40.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.609 rtt min/avg/max/mdev = 0.494/0.494/0.494/0.000 ms 00:08:40.609 11:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:40.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:40.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:08:40.609 00:08:40.609 --- 10.0.0.1 ping statistics --- 00:08:40.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:40.609 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:08:40.609 11:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:40.609 11:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:40.609 11:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:40.609 11:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:40.609 11:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:40.609 11:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:40.609 11:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:40.609 11:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:40.609 11:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:40.609 11:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:40.609 11:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:40.609 11:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:40.609 11:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.609 11:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=899624 00:08:40.609 11:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 899624 00:08:40.609 11:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:40.609 11:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 899624 ']' 00:08:40.609 11:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.609 11:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:40.609 11:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.610 11:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:40.610 11:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.610 [2024-11-15 11:32:05.331397] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
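The trace above is nvmftestinit/nvmf_tcp_init building the physical-NIC loopback topology this test runs on: one of the two E810 ports (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2/24, its sibling port (cvl_0_1) stays in the root namespace as the initiator side at 10.0.0.1/24, an iptables rule opens TCP port 4420, and a ping in each direction confirms the path before nvmf_tgt is started inside the namespace. A condensed, hand-written replay of those steps (interface names, addresses, and flags are the ones from this run; the ipts helper also tags its iptables rule with an SPDK_NVMF comment, omitted here):

    ip netns add cvl_0_0_ns_spdk                         # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move one NIC port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc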
00:08:40.610 [2024-11-15 11:32:05.331460] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.610 [2024-11-15 11:32:05.430703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:40.610 [2024-11-15 11:32:05.485476] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:40.610 [2024-11-15 11:32:05.485528] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:40.610 [2024-11-15 11:32:05.485537] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:40.610 [2024-11-15 11:32:05.485544] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:40.610 [2024-11-15 11:32:05.485551] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:40.610 [2024-11-15 11:32:05.487685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.610 [2024-11-15 11:32:05.487847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:40.610 [2024-11-15 11:32:05.488010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:40.610 [2024-11-15 11:32:05.488010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:08:40.872 [2024-11-15 11:32:06.283331] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.872 Malloc0 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:40.872 [2024-11-15 11:32:06.349031] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=899742 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=899745 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:40.872 { 00:08:40.872 "params": { 
00:08:40.872 "name": "Nvme$subsystem", 00:08:40.872 "trtype": "$TEST_TRANSPORT", 00:08:40.872 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:40.872 "adrfam": "ipv4", 00:08:40.872 "trsvcid": "$NVMF_PORT", 00:08:40.872 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:40.872 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:40.872 "hdgst": ${hdgst:-false}, 00:08:40.872 "ddgst": ${ddgst:-false} 00:08:40.872 }, 00:08:40.872 "method": "bdev_nvme_attach_controller" 00:08:40.872 } 00:08:40.872 EOF 00:08:40.872 )") 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=899747 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=899751 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:40.872 { 00:08:40.872 "params": { 00:08:40.872 "name": "Nvme$subsystem", 00:08:40.872 "trtype": "$TEST_TRANSPORT", 00:08:40.872 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:40.872 "adrfam": "ipv4", 00:08:40.872 "trsvcid": "$NVMF_PORT", 00:08:40.872 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:40.872 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:40.872 "hdgst": ${hdgst:-false}, 00:08:40.872 "ddgst": ${ddgst:-false} 00:08:40.872 }, 00:08:40.872 "method": "bdev_nvme_attach_controller" 00:08:40.872 } 00:08:40.872 EOF 00:08:40.872 )") 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:40.872 { 00:08:40.872 "params": { 00:08:40.872 "name": "Nvme$subsystem", 00:08:40.872 "trtype": "$TEST_TRANSPORT", 00:08:40.872 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:40.872 "adrfam": "ipv4", 00:08:40.872 "trsvcid": "$NVMF_PORT", 00:08:40.872 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:40.872 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:40.872 "hdgst": ${hdgst:-false}, 
00:08:40.872 "ddgst": ${ddgst:-false} 00:08:40.872 }, 00:08:40.872 "method": "bdev_nvme_attach_controller" 00:08:40.872 } 00:08:40.872 EOF 00:08:40.872 )") 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:40.872 { 00:08:40.872 "params": { 00:08:40.872 "name": "Nvme$subsystem", 00:08:40.872 "trtype": "$TEST_TRANSPORT", 00:08:40.872 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:40.872 "adrfam": "ipv4", 00:08:40.872 "trsvcid": "$NVMF_PORT", 00:08:40.872 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:40.872 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:40.872 "hdgst": ${hdgst:-false}, 00:08:40.872 "ddgst": ${ddgst:-false} 00:08:40.872 }, 00:08:40.872 "method": "bdev_nvme_attach_controller" 00:08:40.872 } 00:08:40.872 EOF 00:08:40.872 )") 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:40.872 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 899742 00:08:40.873 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:40.873 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:41.133 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:41.133 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:41.133 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:41.134 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:41.134 "params": { 00:08:41.134 "name": "Nvme1", 00:08:41.134 "trtype": "tcp", 00:08:41.134 "traddr": "10.0.0.2", 00:08:41.134 "adrfam": "ipv4", 00:08:41.134 "trsvcid": "4420", 00:08:41.134 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:41.134 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:41.134 "hdgst": false, 00:08:41.134 "ddgst": false 00:08:41.134 }, 00:08:41.134 "method": "bdev_nvme_attach_controller" 00:08:41.134 }' 00:08:41.134 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:41.134 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:41.134 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:41.134 "params": { 00:08:41.134 "name": "Nvme1", 00:08:41.134 "trtype": "tcp", 00:08:41.134 "traddr": "10.0.0.2", 00:08:41.134 "adrfam": "ipv4", 00:08:41.134 "trsvcid": "4420", 00:08:41.134 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:41.134 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:41.134 "hdgst": false, 00:08:41.134 "ddgst": false 00:08:41.134 }, 00:08:41.134 "method": "bdev_nvme_attach_controller" 00:08:41.134 }' 00:08:41.134 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:41.134 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:41.134 "params": { 00:08:41.134 "name": "Nvme1", 00:08:41.134 "trtype": "tcp", 00:08:41.134 "traddr": "10.0.0.2", 00:08:41.134 "adrfam": "ipv4", 00:08:41.134 "trsvcid": "4420", 00:08:41.134 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:41.134 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:41.134 "hdgst": false, 00:08:41.134 "ddgst": false 00:08:41.134 }, 00:08:41.134 "method": "bdev_nvme_attach_controller" 00:08:41.134 }' 00:08:41.134 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:41.134 11:32:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:41.134 "params": { 00:08:41.134 "name": "Nvme1", 00:08:41.134 "trtype": "tcp", 00:08:41.134 "traddr": "10.0.0.2", 00:08:41.134 "adrfam": "ipv4", 00:08:41.134 "trsvcid": "4420", 00:08:41.134 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:41.134 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:41.134 "hdgst": false, 00:08:41.134 "ddgst": false 00:08:41.134 }, 00:08:41.134 "method": "bdev_nvme_attach_controller" 00:08:41.134 }' 00:08:41.134 [2024-11-15 11:32:06.410665] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:08:41.134 [2024-11-15 11:32:06.410728] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:41.134 [2024-11-15 11:32:06.412182] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:08:41.134 [2024-11-15 11:32:06.412257] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:41.134 [2024-11-15 11:32:06.414769] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:08:41.134 [2024-11-15 11:32:06.414858] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:41.134 [2024-11-15 11:32:06.420334] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
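The four heredoc blobs printed above are complete bdevperf JSON configs, one per workload; each attaches a single controller, Nvme1, over TCP to 10.0.0.2:4420 / cnode1 with header and data digests disabled. Each initiator reads its config over /dev/fd/63, which is just bash process substitution, so nothing touches disk. A sketch of one launch (the write case; read, flush, and unmap differ only in -w, the core mask 0x20/0x40/0x80, and the instance id, and the script later gates on each PID, as at bdev_io_wait.sh@37-40):

    ./build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w write -t 1 -s 256 &
    WRITE_PID=$!
    # ... read/flush/unmap instances are started the same way ...
    wait "$WRITE_PID"     # returns once the 1-second write run completes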
00:08:41.134 [2024-11-15 11:32:06.420418] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:08:41.395 [2024-11-15 11:32:06.642068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:41.395 [2024-11-15 11:32:06.682572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:08:41.395 [2024-11-15 11:32:06.738045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:41.395 [2024-11-15 11:32:06.781073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:08:41.395 [2024-11-15 11:32:06.804582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:41.395 [2024-11-15 11:32:06.839798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:08:41.395 [2024-11-15 11:32:06.867860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:41.655 [2024-11-15 11:32:06.903155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:08:41.655 Running I/O for 1 seconds...
00:08:41.655 Running I/O for 1 seconds...
00:08:41.655 Running I/O for 1 seconds...
00:08:41.916 Running I/O for 1 seconds...
00:08:42.748 15208.00 IOPS, 59.41 MiB/s
00:08:42.748 Latency(us)
00:08:42.749 [2024-11-15T10:32:08.247Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:42.749 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:08:42.749 Nvme1n1 : 1.01 15290.93 59.73 0.00 0.00 8348.78 3399.68 16056.32
00:08:42.749 [2024-11-15T10:32:08.247Z] ===================================================================================================================
00:08:42.749 [2024-11-15T10:32:08.247Z] Total : 15290.93 59.73 0.00 0.00 8348.78 3399.68 16056.32
00:08:42.749 5885.00 IOPS, 22.99 MiB/s
00:08:42.749 Latency(us)
00:08:42.749 [2024-11-15T10:32:08.247Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:42.749 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:08:42.749 Nvme1n1 : 1.02 5901.87 23.05 0.00 0.00 21431.06 11414.19 27962.03
00:08:42.749 [2024-11-15T10:32:08.247Z] ===================================================================================================================
00:08:42.749 [2024-11-15T10:32:08.247Z] Total : 5901.87 23.05 0.00 0.00 21431.06 11414.19 27962.03
00:08:42.749 185984.00 IOPS, 726.50 MiB/s
00:08:42.749 Latency(us)
00:08:42.749 [2024-11-15T10:32:08.247Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:42.749 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:08:42.749 Nvme1n1 : 1.00 185607.98 725.03 0.00 0.00 685.41 317.44 2007.04
00:08:42.749 [2024-11-15T10:32:08.247Z] ===================================================================================================================
00:08:42.749 [2024-11-15T10:32:08.247Z] Total : 185607.98 725.03 0.00 0.00 685.41 317.44 2007.04
00:08:42.749 6239.00 IOPS, 24.37 MiB/s
00:08:42.749 Latency(us)
00:08:42.749 [2024-11-15T10:32:08.247Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:42.749 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:08:42.749 Nvme1n1 : 1.01 6351.23 24.81 0.00 0.00 20086.80 4887.89 46093.65
00:08:42.749 [2024-11-15T10:32:08.247Z]
===================================================================================================================
00:08:42.749 [2024-11-15T10:32:08.247Z] Total : 6351.23 24.81 0.00 0.00 20086.80 4887.89 46093.65
00:08:43.009 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 899745
00:08:43.010 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 899747
00:08:43.010 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 899751
00:08:43.010 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:08:43.010 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable
00:08:43.010 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:08:43.010 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:08:43.010 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:08:43.010 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:08:43.010 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:43.010 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync
00:08:43.010 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:43.010 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e
00:08:43.010 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:43.010 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:43.010 rmmod nvme_tcp
00:08:43.010 rmmod nvme_fabrics
00:08:43.010 rmmod nvme_keyring
00:08:43.010 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:43.010 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e
00:08:43.010 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0
00:08:43.010 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 899624 ']'
00:08:43.010 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 899624
00:08:43.010 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 899624 ']'
00:08:43.010 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 899624
00:08:43.010 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname
00:08:43.010 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:08:43.010 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 899624
00:08:43.010 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:08:43.010 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:08:43.010 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo
'killing process with pid 899624' 00:08:43.010 killing process with pid 899624 00:08:43.010 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 899624 00:08:43.010 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 899624 00:08:43.271 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:43.271 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:43.271 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:43.271 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:43.271 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:43.271 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:43.271 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:43.271 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:43.271 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:43.271 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.272 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:43.272 11:32:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.184 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:45.184 00:08:45.184 real 0m13.096s 00:08:45.184 user 0m19.696s 00:08:45.184 sys 0m7.380s 00:08:45.184 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:45.184 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:45.184 ************************************ 00:08:45.184 END TEST nvmf_bdev_io_wait 00:08:45.184 ************************************ 00:08:45.184 11:32:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:45.184 11:32:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:45.184 11:32:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:45.184 11:32:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:45.446 ************************************ 00:08:45.446 START TEST nvmf_queue_depth 00:08:45.446 ************************************ 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:45.446 * Looking for test storage... 
00:08:45.446 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:45.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.446 --rc genhtml_branch_coverage=1 00:08:45.446 --rc genhtml_function_coverage=1 00:08:45.446 --rc genhtml_legend=1 00:08:45.446 --rc geninfo_all_blocks=1 00:08:45.446 --rc geninfo_unexecuted_blocks=1 00:08:45.446 00:08:45.446 ' 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:45.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.446 --rc genhtml_branch_coverage=1 00:08:45.446 --rc genhtml_function_coverage=1 00:08:45.446 --rc genhtml_legend=1 00:08:45.446 --rc geninfo_all_blocks=1 00:08:45.446 --rc geninfo_unexecuted_blocks=1 00:08:45.446 00:08:45.446 ' 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:45.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.446 --rc genhtml_branch_coverage=1 00:08:45.446 --rc genhtml_function_coverage=1 00:08:45.446 --rc genhtml_legend=1 00:08:45.446 --rc geninfo_all_blocks=1 00:08:45.446 --rc geninfo_unexecuted_blocks=1 00:08:45.446 00:08:45.446 ' 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:45.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.446 --rc genhtml_branch_coverage=1 00:08:45.446 --rc genhtml_function_coverage=1 00:08:45.446 --rc genhtml_legend=1 00:08:45.446 --rc geninfo_all_blocks=1 00:08:45.446 --rc geninfo_unexecuted_blocks=1 00:08:45.446 00:08:45.446 ' 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.446 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:45.447 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.447 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:45.447 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:45.447 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:45.447 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:45.447 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:45.447 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:45.447 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:45.447 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:45.447 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:45.447 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:45.447 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:45.447 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:45.447 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:45.447 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:45.447 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:45.447 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:45.447 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:45.447 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:45.447 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:45.447 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:45.447 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.447 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.447 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.447 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:45.447 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:45.447 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:45.447 11:32:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:53.584 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:53.584 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:53.584 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:53.584 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:53.584 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:53.584 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:53.584 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:53.584 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:53.584 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:53.584 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:53.584 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:53.584 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:53.584 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:53.584 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:53.584 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:53.584 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:53.584 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:53.584 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:53.584 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:53.584 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:53.584 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:53.584 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:53.584 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:53.584 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:53.584 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:53.584 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:53.584 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:53.584 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:53.584 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:53.584 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:53.584 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:53.584 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:53.584 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:53.585 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:53.585 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:53.585 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:53.585 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:53.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:53.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.406 ms 00:08:53.585 00:08:53.585 --- 10.0.0.2 ping statistics --- 00:08:53.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.585 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:53.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:53.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:08:53.585 00:08:53.585 --- 10.0.0.1 ping statistics --- 00:08:53.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.585 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=904391 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 904391 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 904391 ']' 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:53.585 11:32:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:53.585 [2024-11-15 11:32:18.499042] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
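The nvmftestinit trace above splits the two physical E810 ports into a target/initiator pair on a single host: cvl_0_0 is moved into a private network namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), with TCP port 4420 opened in the firewall. A minimal standalone sketch of that setup, reconstructed from the commands logged above (interface names, addresses, and nvmf_tgt flags are as logged; the binary path is relative to the SPDK repo root, and error handling is omitted):

  # Move the target-side port into its own namespace; the initiator port stays in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Allow NVMe/TCP traffic in, then verify both directions before starting the target.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # The nvmf target then runs inside the namespace, pinned to core 1 (-m 0x2):
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2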
00:08:53.585 [2024-11-15 11:32:18.499105] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:53.585 [2024-11-15 11:32:18.602971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.585 [2024-11-15 11:32:18.653469] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:53.585 [2024-11-15 11:32:18.653516] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:53.585 [2024-11-15 11:32:18.653525] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:53.585 [2024-11-15 11:32:18.653532] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:53.585 [2024-11-15 11:32:18.653538] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:53.585 [2024-11-15 11:32:18.654305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.846 11:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:53.846 11:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:08:53.846 11:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:53.846 11:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:53.846 11:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.107 11:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:54.107 11:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:54.107 11:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.107 11:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.107 [2024-11-15 11:32:19.361860] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:54.107 11:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.107 11:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:54.107 11:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.107 11:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.107 Malloc0 00:08:54.107 11:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.107 11:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:54.107 11:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.107 11:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.107 11:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.107 11:32:19 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:54.107 11:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.107 11:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.107 11:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.107 11:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:54.107 11:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.107 11:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.107 [2024-11-15 11:32:19.422914] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:54.107 11:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.107 11:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=904718 00:08:54.107 11:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:54.107 11:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:54.107 11:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 904718 /var/tmp/bdevperf.sock 00:08:54.107 11:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 904718 ']' 00:08:54.107 11:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:54.107 11:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:54.107 11:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:54.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:54.107 11:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:54.107 11:32:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:54.107 [2024-11-15 11:32:19.481921] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
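With the target listening, queue_depth.sh provisions it entirely over JSON-RPC and then drives I/O from a separate bdevperf process that attaches as an NVMe/TCP initiator. A sketch of the equivalent sequence as plain rpc.py calls (scripts/rpc.py here stands in for the test's rpc_cmd wrapper, and paths are relative to the SPDK repo root; all arguments are as logged in this trace):

  # Target side: transport, backing bdev, subsystem, namespace, listener.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Initiator side: start bdevperf idle (-z) on its own RPC socket, attach the remote
  # controller, then kick off the 10-second verify workload at queue depth 1024.
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests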
00:08:54.107 [2024-11-15 11:32:19.481982] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid904718 ] 00:08:54.107 [2024-11-15 11:32:19.574616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.368 [2024-11-15 11:32:19.627829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.940 11:32:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:54.940 11:32:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:08:54.940 11:32:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:54.940 11:32:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.940 11:32:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:55.200 NVMe0n1 00:08:55.200 11:32:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.200 11:32:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:55.200 Running I/O for 10 seconds... 00:08:57.528 8465.00 IOPS, 33.07 MiB/s [2024-11-15T10:32:23.967Z] 9706.50 IOPS, 37.92 MiB/s [2024-11-15T10:32:24.906Z] 9903.33 IOPS, 38.68 MiB/s [2024-11-15T10:32:25.845Z] 10400.00 IOPS, 40.62 MiB/s [2024-11-15T10:32:26.783Z] 10814.80 IOPS, 42.25 MiB/s [2024-11-15T10:32:27.723Z] 11091.33 IOPS, 43.33 MiB/s [2024-11-15T10:32:29.107Z] 11266.86 IOPS, 44.01 MiB/s [2024-11-15T10:32:29.678Z] 11452.75 IOPS, 44.74 MiB/s [2024-11-15T10:32:31.104Z] 11605.33 IOPS, 45.33 MiB/s [2024-11-15T10:32:31.104Z] 11716.20 IOPS, 45.77 MiB/s 00:09:05.606 Latency(us) 00:09:05.606 [2024-11-15T10:32:31.104Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.606 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:05.606 Verification LBA range: start 0x0 length 0x4000 00:09:05.606 NVMe0n1 : 10.05 11753.07 45.91 0.00 0.00 86788.69 11578.03 76458.67 00:09:05.606 [2024-11-15T10:32:31.104Z] =================================================================================================================== 00:09:05.606 [2024-11-15T10:32:31.104Z] Total : 11753.07 45.91 0.00 0.00 86788.69 11578.03 76458.67 00:09:05.606 { 00:09:05.606 "results": [ 00:09:05.606 { 00:09:05.606 "job": "NVMe0n1", 00:09:05.606 "core_mask": "0x1", 00:09:05.606 "workload": "verify", 00:09:05.606 "status": "finished", 00:09:05.606 "verify_range": { 00:09:05.606 "start": 0, 00:09:05.606 "length": 16384 00:09:05.606 }, 00:09:05.606 "queue_depth": 1024, 00:09:05.606 "io_size": 4096, 00:09:05.606 "runtime": 10.048782, 00:09:05.606 "iops": 11753.066192499748, 00:09:05.606 "mibps": 45.91041481445214, 00:09:05.606 "io_failed": 0, 00:09:05.606 "io_timeout": 0, 00:09:05.606 "avg_latency_us": 86788.69399805821, 00:09:05.606 "min_latency_us": 11578.026666666667, 00:09:05.606 "max_latency_us": 76458.66666666667 00:09:05.606 } 00:09:05.606 ], 00:09:05.606 "core_count": 1 00:09:05.606 } 00:09:05.606 11:32:30 
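As a cross-check, the summary numbers above are internally consistent: 11753.07 IOPS at a 4096-byte I/O size is 11753.07 * 4096 / 2^20 ≈ 45.9 MiB/s, matching the reported throughput, and by Little's law a sustained queue depth of 1024 implies an average per-I/O latency of roughly 1024 / 11753.07 s ≈ 87.1 ms, in line with the reported average of 86788.69 us.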
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 904718 00:09:05.606 11:32:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 904718 ']' 00:09:05.606 11:32:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 904718 00:09:05.606 11:32:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:09:05.606 11:32:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:05.606 11:32:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 904718 00:09:05.606 11:32:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:05.606 11:32:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:05.606 11:32:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 904718' 00:09:05.606 killing process with pid 904718 00:09:05.606 11:32:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 904718 00:09:05.606 Received shutdown signal, test time was about 10.000000 seconds 00:09:05.606 00:09:05.606 Latency(us) 00:09:05.606 [2024-11-15T10:32:31.104Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:05.606 [2024-11-15T10:32:31.104Z] =================================================================================================================== 00:09:05.606 [2024-11-15T10:32:31.104Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:05.606 11:32:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 904718 00:09:05.606 11:32:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:05.606 11:32:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:05.606 11:32:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:05.606 11:32:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:05.606 11:32:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:05.606 11:32:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:05.606 11:32:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:05.606 11:32:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:05.606 rmmod nvme_tcp 00:09:05.607 rmmod nvme_fabrics 00:09:05.607 rmmod nvme_keyring 00:09:05.607 11:32:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:05.607 11:32:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:05.607 11:32:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:05.607 11:32:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 904391 ']' 00:09:05.607 11:32:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 904391 00:09:05.607 11:32:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 904391 ']' 00:09:05.607 11:32:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@956 -- # kill -0 904391 00:09:05.607 11:32:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:09:05.607 11:32:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:05.607 11:32:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 904391 00:09:05.607 11:32:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:05.607 11:32:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:05.607 11:32:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 904391' 00:09:05.607 killing process with pid 904391 00:09:05.607 11:32:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 904391 00:09:05.607 11:32:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 904391 00:09:06.011 11:32:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:06.011 11:32:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:06.011 11:32:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:06.011 11:32:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:06.011 11:32:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:06.011 11:32:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:06.011 11:32:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:06.011 11:32:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:06.011 11:32:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:06.011 11:32:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.011 11:32:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.011 11:32:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.016 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:08.016 00:09:08.016 real 0m22.561s 00:09:08.016 user 0m26.010s 00:09:08.016 sys 0m6.992s 00:09:08.016 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:08.016 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.016 ************************************ 00:09:08.016 END TEST nvmf_queue_depth 00:09:08.016 ************************************ 00:09:08.016 11:32:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:08.016 11:32:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:08.016 11:32:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:08.016 11:32:33 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:09:08.016 ************************************ 00:09:08.016 START TEST nvmf_target_multipath 00:09:08.016 ************************************ 00:09:08.016 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:08.016 * Looking for test storage... 00:09:08.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:08.016 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:08.016 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:09:08.016 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:08.016 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:08.016 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:08.016 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:08.016 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:08.016 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:08.016 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:08.016 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:08.016 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:08.016 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:08.016 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:08.016 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:08.016 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:08.016 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:08.016 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:08.277 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:08.277 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:08.277 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:08.277 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:08.277 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:08.277 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:08.277 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:08.277 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:08.277 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:08.277 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:08.277 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:08.277 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:08.277 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:08.277 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:08.277 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:08.277 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:08.277 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:08.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.277 --rc genhtml_branch_coverage=1 00:09:08.277 --rc genhtml_function_coverage=1 00:09:08.277 --rc genhtml_legend=1 00:09:08.277 --rc geninfo_all_blocks=1 00:09:08.277 --rc geninfo_unexecuted_blocks=1 00:09:08.277 00:09:08.277 ' 00:09:08.277 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:08.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.277 --rc genhtml_branch_coverage=1 00:09:08.277 --rc genhtml_function_coverage=1 00:09:08.277 --rc genhtml_legend=1 00:09:08.277 --rc geninfo_all_blocks=1 00:09:08.277 --rc geninfo_unexecuted_blocks=1 00:09:08.277 00:09:08.277 ' 00:09:08.277 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:08.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.277 --rc genhtml_branch_coverage=1 00:09:08.277 --rc genhtml_function_coverage=1 00:09:08.277 --rc genhtml_legend=1 00:09:08.277 --rc geninfo_all_blocks=1 00:09:08.277 --rc geninfo_unexecuted_blocks=1 00:09:08.277 00:09:08.277 ' 00:09:08.277 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:08.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:08.277 --rc genhtml_branch_coverage=1 00:09:08.277 --rc genhtml_function_coverage=1 00:09:08.277 --rc genhtml_legend=1 00:09:08.277 --rc geninfo_all_blocks=1 00:09:08.277 --rc geninfo_unexecuted_blocks=1 00:09:08.277 00:09:08.277 ' 00:09:08.277 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:08.277 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:08.277 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:08.277 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:08.277 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:08.277 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:08.277 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:08.277 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:08.277 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:08.278 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:08.278 11:32:33 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:16.420 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:16.420 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:16.420 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:16.420 11:32:40 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:16.420 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:16.420 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:09:16.421 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:16.421 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:16.421 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:16.421 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:16.421 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:16.421 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:16.421 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.598 ms 00:09:16.421 00:09:16.421 --- 10.0.0.2 ping statistics --- 00:09:16.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.421 rtt min/avg/max/mdev = 0.598/0.598/0.598/0.000 ms 00:09:16.421 11:32:40 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:16.421 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:16.421 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:09:16.421 00:09:16.421 --- 10.0.0.1 ping statistics --- 00:09:16.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:16.421 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:09:16.421 11:32:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:16.421 11:32:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:16.421 11:32:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:16.421 11:32:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:16.421 11:32:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:16.421 11:32:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:16.421 11:32:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:16.421 11:32:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:16.421 11:32:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:16.421 11:32:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:16.421 11:32:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:16.421 only one NIC for nvmf test 00:09:16.421 11:32:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:16.421 11:32:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:16.421 11:32:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:16.421 11:32:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:16.421 11:32:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
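The nvmf_tcp_init records above boil down to a small, reproducible recipe: one port of the E810 pair (cvl_0_0) becomes the target and is moved into a private network namespace, the other port (cvl_0_1) stays in the root namespace as the initiator, a single firewall rule opens the NVMe/TCP port, and both directions are ping-verified. A minimal sketch, run as root, using the device names and 10.0.0.0/24 addresses taken from the log:

    ip netns add cvl_0_0_ns_spdk                       # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

Tagging the rule with the SPDK_NVMF comment is what lets the teardown below restore the firewall with nothing more than iptables-save | grep -v SPDK_NVMF | iptables-restore.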
00:09:16.421 11:32:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:16.421 11:32:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:16.421 rmmod nvme_tcp 00:09:16.421 rmmod nvme_fabrics 00:09:16.421 rmmod nvme_keyring 00:09:16.421 11:32:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:16.421 11:32:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:16.421 11:32:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:16.421 11:32:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:16.421 11:32:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:16.421 11:32:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:16.421 11:32:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:16.421 11:32:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:16.421 11:32:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:16.421 11:32:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:16.421 11:32:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:16.421 11:32:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:16.421 11:32:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:16.421 11:32:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.421 11:32:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:16.421 11:32:41 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:17.805 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:17.805 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:17.805 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:17.805 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:17.805 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:17.805 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:17.805 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:17.805 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:17.805 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:17.805 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:17.805 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:17.805 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0
00:09:17.806 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:09:17.806 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:09:17.806 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:09:17.806 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:09:17.806 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr
00:09:17.806 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save
00:09:17.806 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:09:17.806 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:09:17.806 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:09:17.806 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns
00:09:17.806 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:17.806 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:17.806 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:17.806 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:09:17.806
00:09:17.806 real 0m9.926s
00:09:17.806 user 0m2.100s
00:09:17.806 sys 0m5.774s
00:09:17.806 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable
00:09:17.806 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:09:17.806 ************************************
00:09:17.806 END TEST nvmf_target_multipath
00:09:17.806 ************************************
00:09:18.068 11:32:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:09:18.068 11:32:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:09:18.068 11:32:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable
00:09:18.068 11:32:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:18.068 ************************************
00:09:18.068 START TEST nvmf_zcopy
00:09:18.068 ************************************
00:09:18.068 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:09:18.068 * Looking for test storage...
00:09:18.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:18.068 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:18.068 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:09:18.068 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:18.068 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:18.068 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:18.068 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:18.068 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:18.068 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:18.068 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:18.068 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:18.068 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:18.068 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:18.068 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:18.068 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:18.068 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:18.068 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:18.068 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:18.068 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:18.068 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:18.068 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:18.068 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:18.068 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:18.068 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:18.068 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:18.068 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:18.068 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:18.068 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:18.069 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:18.069 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:18.069 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:18.069 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:18.069 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:18.069 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:18.069 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:18.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.069 --rc genhtml_branch_coverage=1 00:09:18.069 --rc genhtml_function_coverage=1 00:09:18.069 --rc genhtml_legend=1 00:09:18.069 --rc geninfo_all_blocks=1 00:09:18.069 --rc geninfo_unexecuted_blocks=1 00:09:18.069 00:09:18.069 ' 00:09:18.069 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:18.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.069 --rc genhtml_branch_coverage=1 00:09:18.069 --rc genhtml_function_coverage=1 00:09:18.069 --rc genhtml_legend=1 00:09:18.069 --rc geninfo_all_blocks=1 00:09:18.069 --rc geninfo_unexecuted_blocks=1 00:09:18.069 00:09:18.069 ' 00:09:18.069 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:18.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.069 --rc genhtml_branch_coverage=1 00:09:18.069 --rc genhtml_function_coverage=1 00:09:18.069 --rc genhtml_legend=1 00:09:18.069 --rc geninfo_all_blocks=1 00:09:18.069 --rc geninfo_unexecuted_blocks=1 00:09:18.069 00:09:18.069 ' 00:09:18.069 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:18.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.069 --rc genhtml_branch_coverage=1 00:09:18.069 --rc genhtml_function_coverage=1 00:09:18.069 --rc genhtml_legend=1 00:09:18.069 --rc geninfo_all_blocks=1 00:09:18.069 --rc geninfo_unexecuted_blocks=1 00:09:18.069 00:09:18.069 ' 00:09:18.069 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:18.069 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:18.069 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:18.069 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:18.069 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:18.069 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:18.069 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:18.069 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:18.069 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:18.069 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:18.069 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:18.069 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:18.069 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:18.069 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:18.069 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:18.069 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:18.069 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:18.330 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:18.330 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:18.330 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:18.330 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:18.330 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:18.330 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:18.330 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.330 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.330 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.330 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:18.330 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.330 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:18.330 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:18.330 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:18.330 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:18.330 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:18.330 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:18.330 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:18.330 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:18.330 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:18.330 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:18.330 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:18.330 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:18.330 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:18.330 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:09:18.330 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:18.330 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:18.330 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:18.330 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.330 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:18.330 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.330 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:18.330 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:18.330 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:18.330 11:32:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:26.473 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:26.473 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:26.473 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:26.473 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:26.473 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:26.473 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:26.473 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:26.473 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:26.473 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:26.473 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:26.473 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:26.473 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:26.473 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:26.473 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:26.473 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:26.473 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:26.473 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:26.473 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:26.473 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:26.473 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:26.473 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:26.473 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:26.473 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:26.473 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:26.473 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:26.473 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:26.473 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:26.473 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:26.473 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:26.473 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:26.474 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:26.474 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:26.474 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:26.474 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:26.474 11:32:50 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:26.474 11:32:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:26.474 11:32:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:26.474 11:32:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:26.474 11:32:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:26.474 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:26.474 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:09:26.474 00:09:26.474 --- 10.0.0.2 ping statistics --- 00:09:26.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.474 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:09:26.474 11:32:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:26.474 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:26.474 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:09:26.474 00:09:26.474 --- 10.0.0.1 ping statistics --- 00:09:26.474 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.474 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:09:26.475 11:32:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:26.475 11:32:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:26.475 11:32:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:26.475 11:32:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:26.475 11:32:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:26.475 11:32:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:26.475 11:32:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:26.475 11:32:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:26.475 11:32:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:26.475 11:32:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:26.475 11:32:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:26.475 11:32:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:26.475 11:32:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:26.475 11:32:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=915422 00:09:26.475 11:32:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 915422 00:09:26.475 11:32:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:26.475 11:32:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 915422 ']' 00:09:26.475 11:32:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.475 11:32:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:26.475 11:32:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.475 11:32:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:26.475 11:32:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:26.475 [2024-11-15 11:32:51.149057] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
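The nvmfappstart records above launch the target application inside that namespace ('-m 0x2' pins it to core 1, which is why the reactor banner in this run reports core 1) and then wait for its RPC socket before provisioning anything. Stripped to its essentials (paths verbatim from the log; the rpc.py poll below is only a stand-in for the harness's waitforlisten helper):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!    # 915422 in this run
    # crude readiness check: retry an RPC against /var/tmp/spdk.sock until it answers
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 30 rpc_get_methods > /dev/null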
00:09:26.475 [2024-11-15 11:32:51.149120] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:26.475 [2024-11-15 11:32:51.247327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.475 [2024-11-15 11:32:51.297327] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:26.475 [2024-11-15 11:32:51.297376] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:26.475 [2024-11-15 11:32:51.297384] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:26.475 [2024-11-15 11:32:51.297392] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:26.475 [2024-11-15 11:32:51.297398] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:26.475 [2024-11-15 11:32:51.298197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.736 11:32:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:26.736 11:32:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:09:26.736 11:32:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:26.736 11:32:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:26.736 11:32:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:26.736 11:32:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:26.736 11:32:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:26.737 11:32:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:26.737 11:32:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.737 11:32:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:26.737 [2024-11-15 11:32:52.026148] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:26.737 11:32:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.737 11:32:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:26.737 11:32:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.737 11:32:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:26.737 11:32:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.737 11:32:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:26.737 11:32:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.737 11:32:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:26.737 [2024-11-15 11:32:52.050417] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:26.737 11:32:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.737 11:32:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:26.737 11:32:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.737 11:32:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:26.737 11:32:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.737 11:32:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:26.737 11:32:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.737 11:32:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:26.737 malloc0 00:09:26.737 11:32:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.737 11:32:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:26.737 11:32:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.737 11:32:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:26.737 11:32:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.737 11:32:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:26.737 11:32:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:26.737 11:32:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:26.737 11:32:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:26.737 11:32:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:26.737 11:32:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:26.737 { 00:09:26.737 "params": { 00:09:26.737 "name": "Nvme$subsystem", 00:09:26.737 "trtype": "$TEST_TRANSPORT", 00:09:26.737 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:26.737 "adrfam": "ipv4", 00:09:26.737 "trsvcid": "$NVMF_PORT", 00:09:26.737 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:26.737 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:26.737 "hdgst": ${hdgst:-false}, 00:09:26.737 "ddgst": ${ddgst:-false} 00:09:26.737 }, 00:09:26.737 "method": "bdev_nvme_attach_controller" 00:09:26.737 } 00:09:26.737 EOF 00:09:26.737 )") 00:09:26.737 11:32:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:26.737 11:32:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
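Provisioning the listening target traced in the records above comes down to six RPCs; rpc_cmd effectively forwards its arguments to scripts/rpc.py, which reaches the target over the filesystem socket /var/tmp/spdk.sock even though the target runs in its own network namespace. The same sequence written out directly (arguments verbatim from the log):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy        # TCP transport with zero-copy enabled
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0               # 32 MiB RAM disk with 4 KiB blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # expose it as NSID 1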
00:09:26.737 11:32:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:09:26.737 11:32:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:09:26.737 "params": {
00:09:26.737 "name": "Nvme1",
00:09:26.737 "trtype": "tcp",
00:09:26.737 "traddr": "10.0.0.2",
00:09:26.737 "adrfam": "ipv4",
00:09:26.737 "trsvcid": "4420",
00:09:26.737 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:09:26.737 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:09:26.737 "hdgst": false,
00:09:26.737 "ddgst": false
00:09:26.737 },
00:09:26.737 "method": "bdev_nvme_attach_controller"
00:09:26.737 }'
00:09:26.737 [2024-11-15 11:32:52.153291] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
00:09:26.999 [2024-11-15 11:32:52.153365] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid915769 ]
00:09:26.999 [2024-11-15 11:32:52.241000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:26.999 [2024-11-15 11:32:52.295487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:27.260 Running I/O for 10 seconds...
00:09:29.145 6433.00 IOPS, 50.26 MiB/s [2024-11-15T10:32:56.026Z] 6488.50 IOPS, 50.69 MiB/s [2024-11-15T10:32:56.969Z] 6982.00 IOPS, 54.55 MiB/s [2024-11-15T10:32:57.912Z] 7671.00 IOPS, 59.93 MiB/s [2024-11-15T10:32:58.854Z] 8084.20 IOPS, 63.16 MiB/s [2024-11-15T10:32:59.814Z] 8360.17 IOPS, 65.31 MiB/s [2024-11-15T10:33:00.755Z] 8556.86 IOPS, 66.85 MiB/s [2024-11-15T10:33:01.696Z] 8701.38 IOPS, 67.98 MiB/s [2024-11-15T10:33:02.637Z] 8815.44 IOPS, 68.87 MiB/s [2024-11-15T10:33:02.637Z] 8906.50 IOPS, 69.58 MiB/s
00:09:37.139 Latency(us)
00:09:37.139 [2024-11-15T10:33:02.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:37.139 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:09:37.139 Verification LBA range: start 0x0 length 0x1000
00:09:37.139 Nvme1n1 : 10.01 8908.53 69.60 0.00 0.00 14321.21 880.64 28835.84
00:09:37.139 [2024-11-15T10:33:02.637Z] ===================================================================================================================
00:09:37.139 [2024-11-15T10:33:02.637Z] Total : 8908.53 69.60 0.00 0.00 14321.21 880.64 28835.84
00:09:37.400 11:33:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=917892
00:09:37.400 11:33:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:09:37.400 11:33:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:37.400 11:33:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:09:37.400 11:33:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:09:37.400 11:33:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:09:37.400 11:33:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:09:37.400 11:33:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:09:37.400 11:33:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:09:37.400 {
00:09:37.400 "params": {
00:09:37.400 "name": 
"Nvme$subsystem", 00:09:37.400 "trtype": "$TEST_TRANSPORT", 00:09:37.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:37.400 "adrfam": "ipv4", 00:09:37.400 "trsvcid": "$NVMF_PORT", 00:09:37.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:37.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:37.400 "hdgst": ${hdgst:-false}, 00:09:37.400 "ddgst": ${ddgst:-false} 00:09:37.400 }, 00:09:37.400 "method": "bdev_nvme_attach_controller" 00:09:37.400 } 00:09:37.400 EOF 00:09:37.400 )") 00:09:37.400 [2024-11-15 11:33:02.728902] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.400 11:33:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:37.400 [2024-11-15 11:33:02.728930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.400 11:33:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:09:37.400 11:33:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:37.400 11:33:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:37.400 "params": { 00:09:37.400 "name": "Nvme1", 00:09:37.400 "trtype": "tcp", 00:09:37.400 "traddr": "10.0.0.2", 00:09:37.400 "adrfam": "ipv4", 00:09:37.400 "trsvcid": "4420", 00:09:37.400 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:37.400 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:37.400 "hdgst": false, 00:09:37.400 "ddgst": false 00:09:37.400 }, 00:09:37.400 "method": "bdev_nvme_attach_controller" 00:09:37.400 }' 00:09:37.400 [2024-11-15 11:33:02.740903] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.400 [2024-11-15 11:33:02.740912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.400 [2024-11-15 11:33:02.752933] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.400 [2024-11-15 11:33:02.752940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.400 [2024-11-15 11:33:02.764966] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:37.400 [2024-11-15 11:33:02.764973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:37.400 [2024-11-15 11:33:02.771598] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:09:37.400 [2024-11-15 11:33:02.771648] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid917892 ]
00:09:37.400 [2024-11-15 11:33:02.776997] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:37.400 [2024-11-15 11:33:02.777004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... identical "Requested NSID 1 already in use" / "Unable to add namespace" pairs repeat from 11:33:02.789027 through 11:33:02.849184 ...]
00:09:37.400 [2024-11-15 11:33:02.853863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[... error pairs repeat from 11:33:02.861209 through 11:33:02.873249 ...]
00:09:37.400 [2024-11-15 11:33:02.883314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[... error pairs repeat from 11:33:02.885271 through 11:33:03.186075 ...]
00:09:37.923 Running I/O for 5 seconds...
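A note on the EAL line above: -c 0x1 is the core mask, which is why spdk_app_start reports a single available core and exactly one reactor starts on core 0. A small illustrative sketch of the mask-to-core mapping; the loop bound is arbitrary:

mask=0x1                                  # core mask from the EAL parameters line
for i in 0 1 2 3; do                      # inspect the low four bits (arbitrary bound)
  (( (mask >> i) & 1 )) && echo "reactor on core $i"
done                                      # 0x1 -> only core 0, matching the notices above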
00:09:37.923 [2024-11-15 11:33:03.200917] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:37.923 [2024-11-15 11:33:03.200937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... identical error pairs repeat from 11:33:03.214761 through 11:33:04.144660 ...]
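The two-line pair that dominates this stretch of the log is one failure reported at two layers: spdk_nvmf_subsystem_add_ns_ext rejects a namespace ID that is still allocated, and the RPC handler then logs that the namespace could not be added. A minimal sketch of the kind of sequence that produces it; the NQN reuse and the Malloc0 bdev name are assumptions for illustration:

# First add succeeds and claims NSID 1; the repeat is rejected with
# "Requested NSID 1 already in use", mirroring the pairs above.
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1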
[... identical error pairs repeat from 11:33:04.157166 through 11:33:04.171022 ...]
00:09:38.707 [2024-11-15 11:33:04.184323] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:38.707 19042.00 IOPS, 148.77 MiB/s [2024-11-15T10:33:04.197Z]
[2024-11-15 11:33:04.184341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... identical error pairs repeat from 11:33:04.197138 through 11:33:04.328882 ...]
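The periodic stats line is self-consistent with an 8 KiB I/O size; that size is an inference from the arithmetic, not something stated in this excerpt:

# 19042 IOPS x 8192 B = 155,992,064 B/s; / 1,048,576 B/MiB = 148.7656 MiB/s,
# which rounds to the "148.77 MiB/s" reported above.
echo 'scale=4; 19042 * 8192 / 1048576' | bc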
00:09:38.969 [2024-11-15 11:33:04.342261] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:38.969 [2024-11-15 11:33:04.342275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... identical error pairs repeat from 11:33:04.355521 through 11:33:05.092801 ...]
00:09:39.752 [2024-11-15 11:33:05.106074] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:39.752 [2024-11-15 11:33:05.106088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... identical error pairs repeat from 11:33:05.118862 through 11:33:05.184019 ...]
00:09:39.752 19119.00 IOPS, 149.37 MiB/s [2024-11-15T10:33:05.250Z]
[2024-11-15 11:33:05.196935] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:39.752 [2024-11-15 11:33:05.196950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... identical error pairs repeat from 11:33:05.209661 through 11:33:05.275855 ...]
00:09:40.013 [2024-11-15 11:33:05.288068] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:40.013 [2024-11-15 11:33:05.288082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... identical error pairs repeat from 11:33:05.301044 through 11:33:06.037243 ...]
00:09:40.798 [2024-11-15 11:33:06.050531] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.798 [2024-11-15 11:33:06.063753] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.798 [2024-11-15 11:33:06.063768] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.798 [2024-11-15 11:33:06.076438] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.798 [2024-11-15 11:33:06.076452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.798 [2024-11-15 11:33:06.089735] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.798 [2024-11-15 11:33:06.089750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.798 [2024-11-15 11:33:06.103236] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.798 [2024-11-15 11:33:06.103250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.798 [2024-11-15 11:33:06.116023] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.798 [2024-11-15 11:33:06.116037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.798 [2024-11-15 11:33:06.129376] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.798 [2024-11-15 11:33:06.129390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.798 [2024-11-15 11:33:06.142698] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.798 [2024-11-15 11:33:06.142712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.798 [2024-11-15 11:33:06.156039] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.798 [2024-11-15 11:33:06.156054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.798 [2024-11-15 11:33:06.169698] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.798 [2024-11-15 11:33:06.169718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.798 [2024-11-15 11:33:06.183115] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.798 [2024-11-15 11:33:06.183130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.798 [2024-11-15 11:33:06.196788] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.798 [2024-11-15 11:33:06.196803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.798 19177.67 IOPS, 149.83 MiB/s [2024-11-15T10:33:06.297Z] [2024-11-15 11:33:06.209560] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.799 [2024-11-15 11:33:06.209579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.799 [2024-11-15 11:33:06.223025] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.799 [2024-11-15 11:33:06.223039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.799 [2024-11-15 11:33:06.235810] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.799 [2024-11-15 11:33:06.235825] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.799 [2024-11-15 
11:33:06.248545] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.799 [2024-11-15 11:33:06.248560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.799 [2024-11-15 11:33:06.260922] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.799 [2024-11-15 11:33:06.260936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.799 [2024-11-15 11:33:06.273679] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.799 [2024-11-15 11:33:06.273694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:40.799 [2024-11-15 11:33:06.287168] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:40.799 [2024-11-15 11:33:06.287183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.060 [2024-11-15 11:33:06.299918] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.060 [2024-11-15 11:33:06.299933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.060 [2024-11-15 11:33:06.312893] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.060 [2024-11-15 11:33:06.312908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.060 [2024-11-15 11:33:06.326592] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.060 [2024-11-15 11:33:06.326607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.060 [2024-11-15 11:33:06.339930] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.060 [2024-11-15 11:33:06.339945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.060 [2024-11-15 11:33:06.353071] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.060 [2024-11-15 11:33:06.353085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.060 [2024-11-15 11:33:06.365880] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.060 [2024-11-15 11:33:06.365894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.060 [2024-11-15 11:33:06.378194] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.060 [2024-11-15 11:33:06.378209] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.060 [2024-11-15 11:33:06.391465] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.060 [2024-11-15 11:33:06.391480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.060 [2024-11-15 11:33:06.404169] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.060 [2024-11-15 11:33:06.404183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.060 [2024-11-15 11:33:06.417348] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.060 [2024-11-15 11:33:06.417362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.060 [2024-11-15 11:33:06.430589] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.060 [2024-11-15 11:33:06.430604] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.060 [2024-11-15 11:33:06.444176] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.060 [2024-11-15 11:33:06.444191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.060 [2024-11-15 11:33:06.457480] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.060 [2024-11-15 11:33:06.457494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.060 [2024-11-15 11:33:06.470385] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.060 [2024-11-15 11:33:06.470399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.060 [2024-11-15 11:33:06.483850] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.060 [2024-11-15 11:33:06.483865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.060 [2024-11-15 11:33:06.496902] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.060 [2024-11-15 11:33:06.496916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.060 [2024-11-15 11:33:06.510032] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.060 [2024-11-15 11:33:06.510046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.060 [2024-11-15 11:33:06.522870] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.060 [2024-11-15 11:33:06.522885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.060 [2024-11-15 11:33:06.536237] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.060 [2024-11-15 11:33:06.536251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.060 [2024-11-15 11:33:06.549155] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.060 [2024-11-15 11:33:06.549169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.321 [2024-11-15 11:33:06.562919] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.321 [2024-11-15 11:33:06.562934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.321 [2024-11-15 11:33:06.576092] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.321 [2024-11-15 11:33:06.576106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.321 [2024-11-15 11:33:06.589466] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.321 [2024-11-15 11:33:06.589480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.321 [2024-11-15 11:33:06.603080] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.321 [2024-11-15 11:33:06.603094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.321 [2024-11-15 11:33:06.615713] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.321 [2024-11-15 11:33:06.615727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.321 [2024-11-15 11:33:06.628262] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.321 [2024-11-15 11:33:06.628276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.321 [2024-11-15 11:33:06.640786] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.321 [2024-11-15 11:33:06.640800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.321 [2024-11-15 11:33:06.654244] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.321 [2024-11-15 11:33:06.654259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.321 [2024-11-15 11:33:06.667390] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.321 [2024-11-15 11:33:06.667404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.321 [2024-11-15 11:33:06.680494] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.321 [2024-11-15 11:33:06.680508] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.321 [2024-11-15 11:33:06.693713] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.321 [2024-11-15 11:33:06.693728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.321 [2024-11-15 11:33:06.707037] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.321 [2024-11-15 11:33:06.707052] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.321 [2024-11-15 11:33:06.720525] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.321 [2024-11-15 11:33:06.720541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.321 [2024-11-15 11:33:06.734011] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.321 [2024-11-15 11:33:06.734026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.321 [2024-11-15 11:33:06.746737] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.321 [2024-11-15 11:33:06.746751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.321 [2024-11-15 11:33:06.760283] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.321 [2024-11-15 11:33:06.760299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.321 [2024-11-15 11:33:06.773880] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.321 [2024-11-15 11:33:06.773895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.321 [2024-11-15 11:33:06.786484] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.321 [2024-11-15 11:33:06.786499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.321 [2024-11-15 11:33:06.799275] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.321 [2024-11-15 11:33:06.799290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.321 [2024-11-15 11:33:06.812471] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.321 [2024-11-15 11:33:06.812486] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.583 [2024-11-15 11:33:06.825959] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.583 [2024-11-15 11:33:06.825976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.583 [2024-11-15 11:33:06.839464] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.583 [2024-11-15 11:33:06.839480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.583 [2024-11-15 11:33:06.851861] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.583 [2024-11-15 11:33:06.851876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.583 [2024-11-15 11:33:06.865076] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.583 [2024-11-15 11:33:06.865092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.583 [2024-11-15 11:33:06.878091] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.583 [2024-11-15 11:33:06.878106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.583 [2024-11-15 11:33:06.891428] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.583 [2024-11-15 11:33:06.891443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.583 [2024-11-15 11:33:06.904651] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.583 [2024-11-15 11:33:06.904666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.583 [2024-11-15 11:33:06.917756] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.583 [2024-11-15 11:33:06.917770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.583 [2024-11-15 11:33:06.931281] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.583 [2024-11-15 11:33:06.931295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.583 [2024-11-15 11:33:06.944045] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.583 [2024-11-15 11:33:06.944060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.583 [2024-11-15 11:33:06.956790] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.583 [2024-11-15 11:33:06.956805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.583 [2024-11-15 11:33:06.970003] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.583 [2024-11-15 11:33:06.970017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.583 [2024-11-15 11:33:06.983145] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.583 [2024-11-15 11:33:06.983160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.583 [2024-11-15 11:33:06.996011] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.583 [2024-11-15 11:33:06.996026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.583 [2024-11-15 11:33:07.008526] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.583 [2024-11-15 11:33:07.008541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.583 [2024-11-15 11:33:07.021321] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.583 [2024-11-15 11:33:07.021336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.583 [2024-11-15 11:33:07.034296] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.583 [2024-11-15 11:33:07.034311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.583 [2024-11-15 11:33:07.046794] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.583 [2024-11-15 11:33:07.046809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.583 [2024-11-15 11:33:07.059820] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.583 [2024-11-15 11:33:07.059835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.584 [2024-11-15 11:33:07.072283] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.584 [2024-11-15 11:33:07.072298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.845 [2024-11-15 11:33:07.084835] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.845 [2024-11-15 11:33:07.084851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.845 [2024-11-15 11:33:07.097398] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.845 [2024-11-15 11:33:07.097412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.845 [2024-11-15 11:33:07.110746] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.845 [2024-11-15 11:33:07.110761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.845 [2024-11-15 11:33:07.123273] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.845 [2024-11-15 11:33:07.123288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.845 [2024-11-15 11:33:07.136630] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.845 [2024-11-15 11:33:07.136645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.845 [2024-11-15 11:33:07.149591] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.845 [2024-11-15 11:33:07.149605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.845 [2024-11-15 11:33:07.162238] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.845 [2024-11-15 11:33:07.162252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.845 [2024-11-15 11:33:07.175451] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.845 [2024-11-15 11:33:07.175466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.845 [2024-11-15 11:33:07.188730] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.845 [2024-11-15 11:33:07.188745] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.845 19186.75 IOPS, 149.90 MiB/s [2024-11-15T10:33:07.343Z] [2024-11-15 11:33:07.202003] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.845 [2024-11-15 11:33:07.202018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.845 [2024-11-15 11:33:07.215490] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.845 [2024-11-15 11:33:07.215505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.845 [2024-11-15 11:33:07.229335] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.845 [2024-11-15 11:33:07.229351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.845 [2024-11-15 11:33:07.242041] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.845 [2024-11-15 11:33:07.242056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.845 [2024-11-15 11:33:07.254747] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.845 [2024-11-15 11:33:07.254762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.845 [2024-11-15 11:33:07.267937] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.845 [2024-11-15 11:33:07.267952] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.845 [2024-11-15 11:33:07.281320] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.845 [2024-11-15 11:33:07.281335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.845 [2024-11-15 11:33:07.294884] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.845 [2024-11-15 11:33:07.294899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.845 [2024-11-15 11:33:07.308044] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.845 [2024-11-15 11:33:07.308059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.845 [2024-11-15 11:33:07.320705] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.845 [2024-11-15 11:33:07.320720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:41.845 [2024-11-15 11:33:07.333464] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:41.845 [2024-11-15 11:33:07.333479] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.106 [2024-11-15 11:33:07.346036] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.106 [2024-11-15 11:33:07.346051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.106 [2024-11-15 11:33:07.359473] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.106 [2024-11-15 11:33:07.359488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.106 [2024-11-15 11:33:07.372506] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.106 [2024-11-15 11:33:07.372521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.106 [2024-11-15 
11:33:07.385854] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.106 [2024-11-15 11:33:07.385869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.106 [2024-11-15 11:33:07.399416] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.106 [2024-11-15 11:33:07.399435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.106 [2024-11-15 11:33:07.412340] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.106 [2024-11-15 11:33:07.412355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.106 [2024-11-15 11:33:07.424939] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.106 [2024-11-15 11:33:07.424954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.106 [2024-11-15 11:33:07.438307] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.106 [2024-11-15 11:33:07.438321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.106 [2024-11-15 11:33:07.451434] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.106 [2024-11-15 11:33:07.451449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.106 [2024-11-15 11:33:07.464320] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.106 [2024-11-15 11:33:07.464334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.106 [2024-11-15 11:33:07.476958] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.106 [2024-11-15 11:33:07.476972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.106 [2024-11-15 11:33:07.489642] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.106 [2024-11-15 11:33:07.489656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.106 [2024-11-15 11:33:07.502362] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.106 [2024-11-15 11:33:07.502377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.106 [2024-11-15 11:33:07.515738] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.106 [2024-11-15 11:33:07.515752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.106 [2024-11-15 11:33:07.528922] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.106 [2024-11-15 11:33:07.528936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.106 [2024-11-15 11:33:07.542463] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.106 [2024-11-15 11:33:07.542477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.106 [2024-11-15 11:33:07.555659] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.106 [2024-11-15 11:33:07.555673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.106 [2024-11-15 11:33:07.568226] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.106 [2024-11-15 11:33:07.568240] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.106 [2024-11-15 11:33:07.581060] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.106 [2024-11-15 11:33:07.581075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.106 [2024-11-15 11:33:07.594240] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.106 [2024-11-15 11:33:07.594254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.367 [2024-11-15 11:33:07.606534] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.367 [2024-11-15 11:33:07.606549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.367 [2024-11-15 11:33:07.619316] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.367 [2024-11-15 11:33:07.619330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.367 [2024-11-15 11:33:07.632541] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.367 [2024-11-15 11:33:07.632555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.367 [2024-11-15 11:33:07.645152] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.367 [2024-11-15 11:33:07.645174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.367 [2024-11-15 11:33:07.658582] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.367 [2024-11-15 11:33:07.658597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.367 [2024-11-15 11:33:07.671735] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.367 [2024-11-15 11:33:07.671749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.367 [2024-11-15 11:33:07.685111] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.367 [2024-11-15 11:33:07.685126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.367 [2024-11-15 11:33:07.698581] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.367 [2024-11-15 11:33:07.698596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.367 [2024-11-15 11:33:07.711351] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.367 [2024-11-15 11:33:07.711365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.367 [2024-11-15 11:33:07.724596] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.367 [2024-11-15 11:33:07.724610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.367 [2024-11-15 11:33:07.737909] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.367 [2024-11-15 11:33:07.737923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.367 [2024-11-15 11:33:07.751403] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.367 [2024-11-15 11:33:07.751417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.367 [2024-11-15 11:33:07.764764] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.367 [2024-11-15 11:33:07.764778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.367 [2024-11-15 11:33:07.777680] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.367 [2024-11-15 11:33:07.777694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.367 [2024-11-15 11:33:07.790177] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.367 [2024-11-15 11:33:07.790191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.367 [2024-11-15 11:33:07.802971] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.367 [2024-11-15 11:33:07.802986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.367 [2024-11-15 11:33:07.815543] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.367 [2024-11-15 11:33:07.815557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.367 [2024-11-15 11:33:07.828601] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.367 [2024-11-15 11:33:07.828615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.367 [2024-11-15 11:33:07.842044] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.367 [2024-11-15 11:33:07.842059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.367 [2024-11-15 11:33:07.855598] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.367 [2024-11-15 11:33:07.855613] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.630 [2024-11-15 11:33:07.869182] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.630 [2024-11-15 11:33:07.869197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.630 [2024-11-15 11:33:07.882585] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.630 [2024-11-15 11:33:07.882599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.630 [2024-11-15 11:33:07.895045] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.630 [2024-11-15 11:33:07.895063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.630 [2024-11-15 11:33:07.907886] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.630 [2024-11-15 11:33:07.907900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.630 [2024-11-15 11:33:07.921260] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.630 [2024-11-15 11:33:07.921274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.630 [2024-11-15 11:33:07.934792] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.630 [2024-11-15 11:33:07.934806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.630 [2024-11-15 11:33:07.948387] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.630 [2024-11-15 11:33:07.948402] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.630 [2024-11-15 11:33:07.961486] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.630 [2024-11-15 11:33:07.961500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.630 [2024-11-15 11:33:07.974974] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.630 [2024-11-15 11:33:07.974988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.630 [2024-11-15 11:33:07.987629] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.630 [2024-11-15 11:33:07.987644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.630 [2024-11-15 11:33:08.000737] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.630 [2024-11-15 11:33:08.000752] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.630 [2024-11-15 11:33:08.013386] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.630 [2024-11-15 11:33:08.013400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.630 [2024-11-15 11:33:08.026755] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.630 [2024-11-15 11:33:08.026770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.630 [2024-11-15 11:33:08.040091] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.630 [2024-11-15 11:33:08.040106] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.630 [2024-11-15 11:33:08.053718] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.630 [2024-11-15 11:33:08.053732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.630 [2024-11-15 11:33:08.067151] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.630 [2024-11-15 11:33:08.067165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.630 [2024-11-15 11:33:08.079915] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.630 [2024-11-15 11:33:08.079930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.630 [2024-11-15 11:33:08.092288] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.630 [2024-11-15 11:33:08.092302] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.630 [2024-11-15 11:33:08.105028] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.630 [2024-11-15 11:33:08.105042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.630 [2024-11-15 11:33:08.118258] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.630 [2024-11-15 11:33:08.118272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.891 [2024-11-15 11:33:08.131687] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.891 [2024-11-15 11:33:08.131702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.891 [2024-11-15 11:33:08.145024] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.891 [2024-11-15 11:33:08.145038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.891 [2024-11-15 11:33:08.158326] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.891 [2024-11-15 11:33:08.158340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.891 [2024-11-15 11:33:08.171758] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.891 [2024-11-15 11:33:08.171773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.891 [2024-11-15 11:33:08.184524] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.891 [2024-11-15 11:33:08.184538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.891 [2024-11-15 11:33:08.197962] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.891 [2024-11-15 11:33:08.197977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.891 19184.00 IOPS, 149.88 MiB/s [2024-11-15T10:33:08.389Z] [2024-11-15 11:33:08.207950] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.891 [2024-11-15 11:33:08.207963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.891 00:09:42.891 Latency(us) 00:09:42.891 [2024-11-15T10:33:08.389Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.891 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:42.891 Nvme1n1 : 5.01 19188.51 149.91 0.00 0.00 6664.55 2812.59 16274.77 00:09:42.891 [2024-11-15T10:33:08.389Z] =================================================================================================================== 00:09:42.891 [2024-11-15T10:33:08.389Z] Total : 19188.51 149.91 0.00 0.00 6664.55 2812.59 16274.77 00:09:42.891 [2024-11-15 11:33:08.219978] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.891 [2024-11-15 11:33:08.219991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.891 [2024-11-15 11:33:08.232014] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.891 [2024-11-15 11:33:08.232026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.891 [2024-11-15 11:33:08.244043] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.891 [2024-11-15 11:33:08.244054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.891 [2024-11-15 11:33:08.256072] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.891 [2024-11-15 11:33:08.256084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.891 [2024-11-15 11:33:08.268101] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.891 [2024-11-15 11:33:08.268110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.891 [2024-11-15 11:33:08.280131] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.891 [2024-11-15 11:33:08.280139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.891 [2024-11-15 
11:33:08.292165] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.891 [2024-11-15 11:33:08.292176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.891 [2024-11-15 11:33:08.304192] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.891 [2024-11-15 11:33:08.304200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.891 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (917892) - No such process 00:09:42.891 11:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 917892 00:09:42.891 11:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:42.891 11:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.891 11:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:42.891 11:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.891 11:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:42.891 11:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.891 11:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:42.891 delay0 00:09:42.891 11:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.891 11:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:42.891 11:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.891 11:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:42.891 11:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.891 11:33:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:43.153 [2024-11-15 11:33:08.436115] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:51.295 Initializing NVMe Controllers 00:09:51.295 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:51.295 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:51.295 Initialization complete. Launching workers. 
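For reference, the rpc_cmd steps traced above (zcopy.sh@52-@56) are thin wrappers over SPDK's rpc.py; a minimal sketch of the same sequence as direct invocations, assuming the default RPC socket and the SPDK repo root as the working directory, with the delay values and the 10.0.0.2:4420 listener taken from this run:

    # Detach NSID 1, then re-attach it backed by a delay bdev so aborts have time to land
    ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000    # avg/p99 read and write latency, in microseconds
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # Drive abortable random I/O at the now-slow namespace for 5 seconds, as zcopy.sh@56 does
    ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'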
00:09:51.295 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 8282 00:09:51.295 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 8563, failed to submit 39 00:09:51.295 success 8412, unsuccessful 151, failed 0 00:09:51.295 11:33:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:51.295 11:33:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:51.295 11:33:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:51.295 11:33:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:51.295 11:33:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:51.295 11:33:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:51.295 11:33:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:51.295 11:33:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:51.295 rmmod nvme_tcp 00:09:51.295 rmmod nvme_fabrics 00:09:51.295 rmmod nvme_keyring 00:09:51.295 11:33:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:51.295 11:33:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:51.295 11:33:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:51.295 11:33:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 915422 ']' 00:09:51.295 11:33:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 915422 00:09:51.295 11:33:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 915422 ']' 00:09:51.295 11:33:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 915422 00:09:51.295 11:33:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:09:51.295 11:33:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:51.295 11:33:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 915422 00:09:51.295 11:33:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:51.295 11:33:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:51.295 11:33:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 915422' 00:09:51.295 killing process with pid 915422 00:09:51.295 11:33:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 915422 00:09:51.295 11:33:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 915422 00:09:51.295 11:33:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:51.295 11:33:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:51.295 11:33:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:51.295 11:33:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:51.295 11:33:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:51.295 11:33:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:51.295 11:33:15 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:51.295 11:33:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:51.295 11:33:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:51.295 11:33:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.295 11:33:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:51.295 11:33:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.681 11:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:52.681 00:09:52.681 real 0m34.439s 00:09:52.681 user 0m45.728s 00:09:52.681 sys 0m11.673s 00:09:52.681 11:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:52.681 11:33:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.681 ************************************ 00:09:52.681 END TEST nvmf_zcopy 00:09:52.681 ************************************ 00:09:52.681 11:33:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:52.681 11:33:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:52.681 11:33:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:52.681 11:33:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:52.681 ************************************ 00:09:52.681 START TEST nvmf_nmic 00:09:52.681 ************************************ 00:09:52.681 11:33:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:52.681 * Looking for test storage... 
00:09:52.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:52.681 11:33:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:52.681 11:33:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:09:52.681 11:33:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:52.681 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:52.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.682 --rc genhtml_branch_coverage=1 00:09:52.682 --rc genhtml_function_coverage=1 00:09:52.682 --rc genhtml_legend=1 00:09:52.682 --rc geninfo_all_blocks=1 00:09:52.682 --rc geninfo_unexecuted_blocks=1 00:09:52.682 00:09:52.682 ' 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:52.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.682 --rc genhtml_branch_coverage=1 00:09:52.682 --rc genhtml_function_coverage=1 00:09:52.682 --rc genhtml_legend=1 00:09:52.682 --rc geninfo_all_blocks=1 00:09:52.682 --rc geninfo_unexecuted_blocks=1 00:09:52.682 00:09:52.682 ' 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:52.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.682 --rc genhtml_branch_coverage=1 00:09:52.682 --rc genhtml_function_coverage=1 00:09:52.682 --rc genhtml_legend=1 00:09:52.682 --rc geninfo_all_blocks=1 00:09:52.682 --rc geninfo_unexecuted_blocks=1 00:09:52.682 00:09:52.682 ' 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:52.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.682 --rc genhtml_branch_coverage=1 00:09:52.682 --rc genhtml_function_coverage=1 00:09:52.682 --rc genhtml_legend=1 00:09:52.682 --rc geninfo_all_blocks=1 00:09:52.682 --rc geninfo_unexecuted_blocks=1 00:09:52.682 00:09:52.682 ' 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
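The xtrace above walks scripts/common.sh's lcov version gate (lt 1.15 2, dispatching into cmp_versions): both version strings are split on separators and compared field by field. A standalone sketch of the same idea, simplified and hypothetical rather than lifted from common.sh, treating missing fields as zero:

    version_lt() {    # true (0) when $1 sorts strictly before $2
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal is not less-than
    }
    version_lt 1.15 2 && echo 'lcov is older than 2.x'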
00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:52.682 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:52.682 
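The "[: : integer expression expected" complaint logged above comes from nvmf/common.sh line 33 running '[' '' -eq 1 ']': a numeric test against a variable that is empty in this configuration, which test(1) cannot coerce to an integer. The warning is harmless here (the test simply evaluates false), but the failure mode and the usual guard look like this (variable name illustrative):

flag=''
[ "$flag" -eq 1 ] && echo on        # -> [: : integer expression expected
[ "${flag:-0}" -eq 1 ] && echo on   # defaulted expansion: quiet, same outcome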
11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:52.682 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:52.683 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.683 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.683 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:52.683 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:52.683 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:52.683 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:52.683 11:33:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:00.829 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:00.829 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:00.829 11:33:25 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:00.829 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:00.829 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:00.829 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:00.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:00.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.565 ms 00:10:00.829 00:10:00.829 --- 10.0.0.2 ping statistics --- 00:10:00.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.829 rtt min/avg/max/mdev = 0.565/0.565/0.565/0.000 ms 00:10:00.830 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:00.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:00.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:10:00.830 00:10:00.830 --- 10.0.0.1 ping statistics --- 00:10:00.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.830 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:10:00.830 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:00.830 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:00.830 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:00.830 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:00.830 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:00.830 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:00.830 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:00.830 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:00.830 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:00.830 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:00.830 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:00.830 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:00.830 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:00.830 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=925044 00:10:00.830 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 925044 00:10:00.830 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:00.830 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 925044 ']' 00:10:00.830 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.830 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:00.830 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.830 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:00.830 11:33:25 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:00.830 [2024-11-15 11:33:25.667409] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
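Worth pausing on the network bring-up traced just above: nvmf_tcp_init builds both ends of the fabric on a single host by moving the target-side port cvl_0_0 into a private namespace with 10.0.0.2, leaving the initiator port cvl_0_1 at 10.0.0.1 in the root namespace, opening TCP/4420 in iptables, and cross-pinging both directions. Condensed from the trace (address flushes omitted):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns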
00:10:00.830 [2024-11-15 11:33:25.667484] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.830 [2024-11-15 11:33:25.770316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:00.830 [2024-11-15 11:33:25.825870] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:00.830 [2024-11-15 11:33:25.825928] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:00.830 [2024-11-15 11:33:25.825940] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:00.830 [2024-11-15 11:33:25.825948] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:00.830 [2024-11-15 11:33:25.825954] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:00.830 [2024-11-15 11:33:25.828154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:00.830 [2024-11-15 11:33:25.828287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:00.830 [2024-11-15 11:33:25.828448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:00.830 [2024-11-15 11:33:25.828448] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.092 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:01.092 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:10:01.092 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:01.092 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:01.092 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:01.092 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:01.092 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:01.092 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.092 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:01.092 [2024-11-15 11:33:26.550689] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:01.092 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.092 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:01.092 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.092 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:01.353 Malloc0 00:10:01.353 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.353 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:01.353 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.353 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:10:01.353 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.353 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:01.353 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.353 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:01.353 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.353 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:01.353 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.353 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:01.353 [2024-11-15 11:33:26.625336] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:01.353 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.353 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:01.353 test case1: single bdev can't be used in multiple subsystems 00:10:01.353 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:01.353 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.353 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:01.353 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.353 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:01.353 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.353 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:01.353 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.353 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:01.353 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:01.353 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.353 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:01.353 [2024-11-15 11:33:26.661134] bdev.c:8502:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:01.353 [2024-11-15 11:33:26.661164] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:01.353 [2024-11-15 11:33:26.661173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.353 request: 00:10:01.353 { 00:10:01.353 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:01.353 "namespace": { 00:10:01.353 "bdev_name": "Malloc0", 00:10:01.353 "no_auto_visible": false, 
00:10:01.353 "no_metadata": false 00:10:01.353 }, 00:10:01.353 "method": "nvmf_subsystem_add_ns", 00:10:01.353 "req_id": 1 00:10:01.353 } 00:10:01.353 Got JSON-RPC error response 00:10:01.353 response: 00:10:01.353 { 00:10:01.353 "code": -32602, 00:10:01.353 "message": "Invalid parameters" 00:10:01.353 } 00:10:01.353 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:01.353 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:01.353 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:01.353 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:01.353 Adding namespace failed - expected result. 00:10:01.353 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:01.353 test case2: host connect to nvmf target in multiple paths 00:10:01.353 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:01.353 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.353 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:01.353 [2024-11-15 11:33:26.673354] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:01.353 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.353 11:33:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:02.738 11:33:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:04.651 11:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:04.651 11:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:10:04.651 11:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:04.651 11:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:04.651 11:33:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:10:06.575 11:33:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:06.575 11:33:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:06.575 11:33:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:06.575 11:33:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:06.575 11:33:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:06.575 11:33:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:10:06.575 11:33:31 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:06.575 [global] 00:10:06.575 thread=1 00:10:06.575 invalidate=1 00:10:06.575 rw=write 00:10:06.575 time_based=1 00:10:06.575 runtime=1 00:10:06.575 ioengine=libaio 00:10:06.575 direct=1 00:10:06.575 bs=4096 00:10:06.575 iodepth=1 00:10:06.575 norandommap=0 00:10:06.575 numjobs=1 00:10:06.575 00:10:06.575 verify_dump=1 00:10:06.575 verify_backlog=512 00:10:06.575 verify_state_save=0 00:10:06.575 do_verify=1 00:10:06.575 verify=crc32c-intel 00:10:06.575 [job0] 00:10:06.575 filename=/dev/nvme0n1 00:10:06.575 Could not set queue depth (nvme0n1) 00:10:06.835 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:06.835 fio-3.35 00:10:06.835 Starting 1 thread 00:10:08.221 00:10:08.221 job0: (groupid=0, jobs=1): err= 0: pid=926592: Fri Nov 15 11:33:33 2024 00:10:08.221 read: IOPS=18, BW=73.7KiB/s (75.5kB/s)(76.0KiB/1031msec) 00:10:08.221 slat (nsec): min=6855, max=28652, avg=25154.05, stdev=5841.36 00:10:08.221 clat (usec): min=709, max=41998, avg=37226.05, stdev=12841.79 00:10:08.221 lat (usec): min=737, max=42024, avg=37251.20, stdev=12843.83 00:10:08.221 clat percentiles (usec): 00:10:08.221 | 1.00th=[ 709], 5.00th=[ 709], 10.00th=[ 898], 20.00th=[41157], 00:10:08.221 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:10:08.221 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:08.221 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:08.221 | 99.99th=[42206] 00:10:08.221 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:10:08.221 slat (nsec): min=9108, max=55425, avg=29873.25, stdev=9582.32 00:10:08.221 clat (usec): min=214, max=1079, avg=594.87, stdev=117.89 00:10:08.221 lat (usec): min=249, max=1112, avg=624.74, stdev=122.17 00:10:08.221 clat percentiles (usec): 00:10:08.221 | 1.00th=[ 338], 5.00th=[ 392], 10.00th=[ 441], 20.00th=[ 494], 00:10:08.221 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 594], 60.00th=[ 619], 00:10:08.221 | 70.00th=[ 668], 80.00th=[ 685], 90.00th=[ 725], 95.00th=[ 758], 00:10:08.221 | 99.00th=[ 996], 99.50th=[ 1004], 99.90th=[ 1074], 99.95th=[ 1074], 00:10:08.221 | 99.99th=[ 1074] 00:10:08.221 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:08.221 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:08.221 lat (usec) : 250=0.19%, 500=21.28%, 750=68.93%, 1000=5.84% 00:10:08.221 lat (msec) : 2=0.56%, 50=3.20% 00:10:08.221 cpu : usr=1.75%, sys=1.17%, ctx=531, majf=0, minf=1 00:10:08.221 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:08.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.221 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.221 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.221 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:08.221 00:10:08.221 Run status group 0 (all jobs): 00:10:08.221 READ: bw=73.7KiB/s (75.5kB/s), 73.7KiB/s-73.7KiB/s (75.5kB/s-75.5kB/s), io=76.0KiB (77.8kB), run=1031-1031msec 00:10:08.221 WRITE: bw=1986KiB/s (2034kB/s), 1986KiB/s-1986KiB/s (2034kB/s-2034kB/s), io=2048KiB (2097kB), run=1031-1031msec 00:10:08.221 00:10:08.221 Disk stats (read/write): 00:10:08.221 nvme0n1: ios=65/512, merge=0/0, ticks=597/238, 
in_queue=835, util=93.59% 00:10:08.221 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:08.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:08.221 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:08.221 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:10:08.221 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:08.221 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:08.221 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:08.221 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:08.221 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:10:08.221 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:08.221 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:08.221 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:08.221 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:08.221 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:08.221 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:08.221 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:08.221 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:08.221 rmmod nvme_tcp 00:10:08.221 rmmod nvme_fabrics 00:10:08.221 rmmod nvme_keyring 00:10:08.221 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:08.221 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:08.221 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:08.221 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 925044 ']' 00:10:08.221 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 925044 00:10:08.221 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 925044 ']' 00:10:08.221 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 925044 00:10:08.221 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:10:08.221 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:08.221 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 925044 00:10:08.221 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:08.221 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:08.221 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 925044' 00:10:08.221 killing process with pid 925044 00:10:08.221 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 
-- # kill 925044 00:10:08.221 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 925044 00:10:08.483 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:08.483 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:08.483 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:08.483 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:08.483 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:08.483 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:08.483 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:08.483 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:08.483 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:08.483 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.483 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.483 11:33:33 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.395 11:33:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:10.395 00:10:10.395 real 0m17.962s 00:10:10.395 user 0m47.271s 00:10:10.395 sys 0m6.595s 00:10:10.395 11:33:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:10.395 11:33:35 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:10.395 ************************************ 00:10:10.395 END TEST nvmf_nmic 00:10:10.395 ************************************ 00:10:10.396 11:33:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:10.396 11:33:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:10.396 11:33:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:10.396 11:33:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:10.658 ************************************ 00:10:10.658 START TEST nvmf_fio_target 00:10:10.658 ************************************ 00:10:10.658 11:33:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:10.658 * Looking for test storage... 
00:10:10.658 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:10.658 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:10.658 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:10:10.658 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:10.658 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:10.658 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:10.658 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:10.658 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:10.658 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:10.658 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:10.658 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:10.658 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:10.658 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:10.658 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:10.658 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:10.658 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:10.658 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:10.658 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:10.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.659 --rc genhtml_branch_coverage=1 00:10:10.659 --rc genhtml_function_coverage=1 00:10:10.659 --rc genhtml_legend=1 00:10:10.659 --rc geninfo_all_blocks=1 00:10:10.659 --rc geninfo_unexecuted_blocks=1 00:10:10.659 00:10:10.659 ' 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:10.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.659 --rc genhtml_branch_coverage=1 00:10:10.659 --rc genhtml_function_coverage=1 00:10:10.659 --rc genhtml_legend=1 00:10:10.659 --rc geninfo_all_blocks=1 00:10:10.659 --rc geninfo_unexecuted_blocks=1 00:10:10.659 00:10:10.659 ' 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:10.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.659 --rc genhtml_branch_coverage=1 00:10:10.659 --rc genhtml_function_coverage=1 00:10:10.659 --rc genhtml_legend=1 00:10:10.659 --rc geninfo_all_blocks=1 00:10:10.659 --rc geninfo_unexecuted_blocks=1 00:10:10.659 00:10:10.659 ' 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:10.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.659 --rc genhtml_branch_coverage=1 00:10:10.659 --rc genhtml_function_coverage=1 00:10:10.659 --rc genhtml_legend=1 00:10:10.659 --rc geninfo_all_blocks=1 00:10:10.659 --rc geninfo_unexecuted_blocks=1 00:10:10.659 00:10:10.659 ' 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:10.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:10.659 11:33:36 
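A note on how the target command line seen earlier (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF) is assembled: build_nvmf_app_args, traced just above, appends the shm id and tracepoint mask, and nvmfappstart later prefixes the namespace wrapper. A sketch of the assembly (the initial seed of NVMF_APP is assumed; it is not shown in this trace):

NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)  # assumed seed
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)             # build_nvmf_app_args, common.sh@29
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")  # prefix: ip netns exec cvl_0_0_ns_spdk
"${NVMF_APP[@]}" -m 0xF &                               # nvmfappstart adds the core mask
nvmfpid=$!                                              # later used by waitforlisten/killprocess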
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:10.659 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:10.660 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:10.660 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:10.660 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:10.660 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:10.660 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.660 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:10.660 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.660 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:10.660 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:10.660 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:10.660 11:33:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.808 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:18.808 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:18.808 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:18.808 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:18.808 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:18.808 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:18.808 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:18.808 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:18.808 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:18.808 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:18.808 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:18.808 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:18.808 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:18.808 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:18.808 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:18.809 11:33:43 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:18.809 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:18.809 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:18.809 11:33:43 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:18.809 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:18.809 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:18.809 11:33:43 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:18.809 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:18.810 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:18.810 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:18.810 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:18.810 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:18.810 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:18.810 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:18.810 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:18.810 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:18.810 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:18.810 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:10:18.810 00:10:18.810 --- 10.0.0.2 ping statistics --- 00:10:18.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.810 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:10:18.810 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:18.810 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:18.810 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:10:18.810 00:10:18.810 --- 10.0.0.1 ping statistics --- 00:10:18.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.810 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:10:18.810 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:18.810 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:18.810 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:18.810 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:18.810 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:18.810 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:18.810 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:18.810 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:18.810 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:18.810 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:18.810 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:18.810 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:18.810 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.810 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=931202 00:10:18.810 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 931202 00:10:18.810 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 931202 ']' 00:10:18.810 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.810 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:18.810 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:18.810 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.810 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:18.810 11:33:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.810 [2024-11-15 11:33:43.763606] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:10:18.810 [2024-11-15 11:33:43.763671] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:18.810 [2024-11-15 11:33:43.862337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:18.810 [2024-11-15 11:33:43.916489] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:18.810 [2024-11-15 11:33:43.916542] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:18.810 [2024-11-15 11:33:43.916551] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:18.810 [2024-11-15 11:33:43.916558] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:18.810 [2024-11-15 11:33:43.916574] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:18.810 [2024-11-15 11:33:43.918779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.810 [2024-11-15 11:33:43.919009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:18.810 [2024-11-15 11:33:43.919175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:18.810 [2024-11-15 11:33:43.919178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.383 11:33:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:19.383 11:33:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:10:19.383 11:33:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:19.383 11:33:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:19.383 11:33:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.383 11:33:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:19.383 11:33:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:19.383 [2024-11-15 11:33:44.793544] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:19.383 11:33:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:19.645 11:33:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:19.645 11:33:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:19.906 11:33:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:19.906 11:33:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:20.167 11:33:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:20.167 11:33:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:20.428 11:33:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:20.428 11:33:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:20.428 11:33:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:20.689 11:33:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:20.689 11:33:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:20.950 11:33:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:20.950 11:33:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:21.211 11:33:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:21.211 11:33:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:21.473 11:33:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:21.473 11:33:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:21.473 11:33:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:21.733 11:33:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:21.733 11:33:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:21.994 11:33:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:21.994 [2024-11-15 11:33:47.485784] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:22.254 11:33:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:22.254 11:33:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:22.515 11:33:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:23.901 11:33:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:23.901 11:33:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:10:23.901 11:33:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:23.901 11:33:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:10:23.901 11:33:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:10:23.901 11:33:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:10:26.446 11:33:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:26.446 11:33:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:26.446 11:33:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:10:26.446 11:33:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:10:26.446 11:33:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:26.446 11:33:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:10:26.446 11:33:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:26.446 [global] 00:10:26.446 thread=1 00:10:26.446 invalidate=1 00:10:26.446 rw=write 00:10:26.446 time_based=1 00:10:26.446 runtime=1 00:10:26.446 ioengine=libaio 00:10:26.446 direct=1 00:10:26.446 bs=4096 00:10:26.446 iodepth=1 00:10:26.446 norandommap=0 00:10:26.446 numjobs=1 00:10:26.446 00:10:26.446 verify_dump=1 00:10:26.446 verify_backlog=512 00:10:26.446 verify_state_save=0 00:10:26.446 do_verify=1 00:10:26.446 verify=crc32c-intel 00:10:26.446 [job0] 00:10:26.446 filename=/dev/nvme0n1 00:10:26.446 [job1] 00:10:26.446 filename=/dev/nvme0n2 00:10:26.446 [job2] 00:10:26.446 filename=/dev/nvme0n3 00:10:26.446 [job3] 00:10:26.446 filename=/dev/nvme0n4 00:10:26.446 Could not set queue depth (nvme0n1) 00:10:26.446 Could not set queue depth (nvme0n2) 00:10:26.446 Could not set queue depth (nvme0n3) 00:10:26.446 Could not set queue depth (nvme0n4) 00:10:26.446 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.446 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.446 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.446 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.446 fio-3.35 00:10:26.446 Starting 4 threads 00:10:27.829 00:10:27.829 job0: (groupid=0, jobs=1): err= 0: pid=932885: Fri Nov 15 11:33:53 2024 00:10:27.829 read: IOPS=524, BW=2098KiB/s (2148kB/s)(2100KiB/1001msec) 00:10:27.829 slat (nsec): min=4734, max=31584, avg=12947.86, stdev=9192.79 00:10:27.829 clat (usec): min=238, max=1621, avg=840.68, stdev=109.71 00:10:27.829 lat (usec): min=245, max=1627, avg=853.63, stdev=116.50 00:10:27.829 clat percentiles (usec): 00:10:27.829 | 1.00th=[ 652], 5.00th=[ 701], 10.00th=[ 734], 20.00th=[ 758], 
00:10:27.829 | 30.00th=[ 783], 40.00th=[ 799], 50.00th=[ 816], 60.00th=[ 840], 00:10:27.829 | 70.00th=[ 881], 80.00th=[ 938], 90.00th=[ 996], 95.00th=[ 1012], 00:10:27.829 | 99.00th=[ 1074], 99.50th=[ 1172], 99.90th=[ 1614], 99.95th=[ 1614], 00:10:27.829 | 99.99th=[ 1614] 00:10:27.829 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:27.829 slat (nsec): min=5544, max=52027, avg=18062.54, stdev=13294.56 00:10:27.829 clat (usec): min=213, max=1353, avg=515.69, stdev=131.08 00:10:27.829 lat (usec): min=220, max=1392, avg=533.75, stdev=140.81 00:10:27.829 clat percentiles (usec): 00:10:27.829 | 1.00th=[ 285], 5.00th=[ 338], 10.00th=[ 359], 20.00th=[ 412], 00:10:27.829 | 30.00th=[ 437], 40.00th=[ 457], 50.00th=[ 486], 60.00th=[ 523], 00:10:27.829 | 70.00th=[ 586], 80.00th=[ 644], 90.00th=[ 709], 95.00th=[ 742], 00:10:27.829 | 99.00th=[ 816], 99.50th=[ 840], 99.90th=[ 1074], 99.95th=[ 1352], 00:10:27.829 | 99.99th=[ 1352] 00:10:27.829 bw ( KiB/s): min= 4096, max= 4096, per=34.30%, avg=4096.00, stdev= 0.00, samples=1 00:10:27.829 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:27.829 lat (usec) : 250=0.39%, 500=36.67%, 750=31.38%, 1000=28.47% 00:10:27.830 lat (msec) : 2=3.10% 00:10:27.830 cpu : usr=1.20%, sys=3.80%, ctx=1549, majf=0, minf=2 00:10:27.830 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:27.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.830 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.830 issued rwts: total=525,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.830 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:27.830 job1: (groupid=0, jobs=1): err= 0: pid=932893: Fri Nov 15 11:33:53 2024 00:10:27.830 read: IOPS=527, BW=2110KiB/s (2161kB/s)(2112KiB/1001msec) 00:10:27.830 slat (nsec): min=7162, max=56946, avg=25971.11, stdev=6024.40 00:10:27.830 clat (usec): min=344, max=1007, avg=722.99, stdev=120.44 00:10:27.830 lat (usec): min=371, max=1034, avg=748.96, stdev=120.58 00:10:27.830 clat percentiles (usec): 00:10:27.830 | 1.00th=[ 433], 5.00th=[ 537], 10.00th=[ 578], 20.00th=[ 611], 00:10:27.830 | 30.00th=[ 635], 40.00th=[ 668], 50.00th=[ 750], 60.00th=[ 783], 00:10:27.830 | 70.00th=[ 807], 80.00th=[ 832], 90.00th=[ 865], 95.00th=[ 898], 00:10:27.830 | 99.00th=[ 955], 99.50th=[ 971], 99.90th=[ 1004], 99.95th=[ 1004], 00:10:27.830 | 99.99th=[ 1004] 00:10:27.830 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:27.830 slat (usec): min=10, max=3677, avg=34.66, stdev=114.37 00:10:27.830 clat (usec): min=146, max=800, avg=543.79, stdev=90.46 00:10:27.830 lat (usec): min=158, max=4352, avg=578.44, stdev=147.97 00:10:27.830 clat percentiles (usec): 00:10:27.830 | 1.00th=[ 297], 5.00th=[ 371], 10.00th=[ 404], 20.00th=[ 478], 00:10:27.830 | 30.00th=[ 515], 40.00th=[ 545], 50.00th=[ 553], 60.00th=[ 578], 00:10:27.830 | 70.00th=[ 594], 80.00th=[ 611], 90.00th=[ 644], 95.00th=[ 668], 00:10:27.830 | 99.00th=[ 742], 99.50th=[ 742], 99.90th=[ 783], 99.95th=[ 799], 00:10:27.830 | 99.99th=[ 799] 00:10:27.830 bw ( KiB/s): min= 4104, max= 4104, per=34.37%, avg=4104.00, stdev= 0.00, samples=1 00:10:27.830 iops : min= 1026, max= 1026, avg=1026.00, stdev= 0.00, samples=1 00:10:27.830 lat (usec) : 250=0.26%, 500=17.33%, 750=64.95%, 1000=17.40% 00:10:27.830 lat (msec) : 2=0.06% 00:10:27.830 cpu : usr=2.40%, sys=4.50%, ctx=1557, majf=0, minf=1 00:10:27.830 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:10:27.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.830 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.830 issued rwts: total=528,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.830 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:27.830 job2: (groupid=0, jobs=1): err= 0: pid=932906: Fri Nov 15 11:33:53 2024 00:10:27.830 read: IOPS=17, BW=70.6KiB/s (72.3kB/s)(72.0KiB/1020msec) 00:10:27.830 slat (nsec): min=27042, max=27863, avg=27314.83, stdev=195.45 00:10:27.830 clat (usec): min=1262, max=42024, avg=39238.12, stdev=9489.16 00:10:27.830 lat (usec): min=1289, max=42051, avg=39265.44, stdev=9489.14 00:10:27.830 clat percentiles (usec): 00:10:27.830 | 1.00th=[ 1270], 5.00th=[ 1270], 10.00th=[40633], 20.00th=[41157], 00:10:27.830 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:10:27.830 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:27.830 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:27.830 | 99.99th=[42206] 00:10:27.830 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:10:27.830 slat (nsec): min=9506, max=64426, avg=31266.16, stdev=9723.95 00:10:27.830 clat (usec): min=130, max=920, avg=572.99, stdev=133.04 00:10:27.830 lat (usec): min=141, max=955, avg=604.26, stdev=137.14 00:10:27.830 clat percentiles (usec): 00:10:27.830 | 1.00th=[ 204], 5.00th=[ 343], 10.00th=[ 400], 20.00th=[ 465], 00:10:27.830 | 30.00th=[ 510], 40.00th=[ 545], 50.00th=[ 578], 60.00th=[ 611], 00:10:27.830 | 70.00th=[ 644], 80.00th=[ 685], 90.00th=[ 742], 95.00th=[ 791], 00:10:27.830 | 99.00th=[ 848], 99.50th=[ 857], 99.90th=[ 922], 99.95th=[ 922], 00:10:27.830 | 99.99th=[ 922] 00:10:27.830 bw ( KiB/s): min= 4104, max= 4104, per=34.37%, avg=4104.00, stdev= 0.00, samples=1 00:10:27.830 iops : min= 1026, max= 1026, avg=1026.00, stdev= 0.00, samples=1 00:10:27.830 lat (usec) : 250=1.13%, 500=25.28%, 750=62.08%, 1000=8.11% 00:10:27.830 lat (msec) : 2=0.19%, 50=3.21% 00:10:27.830 cpu : usr=1.08%, sys=1.86%, ctx=530, majf=0, minf=1 00:10:27.830 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:27.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.830 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.830 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.830 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:27.830 job3: (groupid=0, jobs=1): err= 0: pid=932913: Fri Nov 15 11:33:53 2024 00:10:27.830 read: IOPS=16, BW=66.1KiB/s (67.7kB/s)(68.0KiB/1029msec) 00:10:27.830 slat (nsec): min=9052, max=32195, avg=27075.82, stdev=4760.08 00:10:27.830 clat (usec): min=1071, max=42156, avg=39516.97, stdev=9908.45 00:10:27.830 lat (usec): min=1103, max=42184, avg=39544.04, stdev=9907.19 00:10:27.830 clat percentiles (usec): 00:10:27.830 | 1.00th=[ 1074], 5.00th=[ 1074], 10.00th=[41157], 20.00th=[41681], 00:10:27.830 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:10:27.830 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:27.830 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:27.830 | 99.99th=[42206] 00:10:27.830 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:10:27.830 slat (nsec): min=9958, max=69204, avg=34887.95, stdev=10189.84 00:10:27.830 clat (usec): min=319, max=1062, 
avg=653.05, stdev=124.21 00:10:27.830 lat (usec): min=331, max=1102, avg=687.94, stdev=127.47 00:10:27.830 clat percentiles (usec): 00:10:27.830 | 1.00th=[ 367], 5.00th=[ 408], 10.00th=[ 486], 20.00th=[ 553], 00:10:27.830 | 30.00th=[ 594], 40.00th=[ 627], 50.00th=[ 660], 60.00th=[ 685], 00:10:27.830 | 70.00th=[ 717], 80.00th=[ 758], 90.00th=[ 799], 95.00th=[ 840], 00:10:27.830 | 99.00th=[ 914], 99.50th=[ 979], 99.90th=[ 1057], 99.95th=[ 1057], 00:10:27.830 | 99.99th=[ 1057] 00:10:27.830 bw ( KiB/s): min= 4096, max= 4096, per=34.30%, avg=4096.00, stdev= 0.00, samples=1 00:10:27.830 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:27.830 lat (usec) : 500=10.59%, 750=64.27%, 1000=21.74% 00:10:27.830 lat (msec) : 2=0.38%, 50=3.02% 00:10:27.830 cpu : usr=0.97%, sys=2.33%, ctx=533, majf=0, minf=1 00:10:27.830 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:27.830 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.830 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.830 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.830 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:27.830 00:10:27.830 Run status group 0 (all jobs): 00:10:27.830 READ: bw=4229KiB/s (4331kB/s), 66.1KiB/s-2110KiB/s (67.7kB/s-2161kB/s), io=4352KiB (4456kB), run=1001-1029msec 00:10:27.830 WRITE: bw=11.7MiB/s (12.2MB/s), 1990KiB/s-4092KiB/s (2038kB/s-4190kB/s), io=12.0MiB (12.6MB), run=1001-1029msec 00:10:27.830 00:10:27.830 Disk stats (read/write): 00:10:27.830 nvme0n1: ios=562/668, merge=0/0, ticks=483/305, in_queue=788, util=87.07% 00:10:27.830 nvme0n2: ios=565/760, merge=0/0, ticks=732/398, in_queue=1130, util=88.46% 00:10:27.830 nvme0n3: ios=70/512, merge=0/0, ticks=602/231, in_queue=833, util=94.81% 00:10:27.830 nvme0n4: ios=75/512, merge=0/0, ticks=1075/265, in_queue=1340, util=97.00% 00:10:27.830 11:33:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:27.830 [global] 00:10:27.830 thread=1 00:10:27.830 invalidate=1 00:10:27.830 rw=randwrite 00:10:27.830 time_based=1 00:10:27.830 runtime=1 00:10:27.830 ioengine=libaio 00:10:27.830 direct=1 00:10:27.830 bs=4096 00:10:27.830 iodepth=1 00:10:27.830 norandommap=0 00:10:27.830 numjobs=1 00:10:27.830 00:10:27.830 verify_dump=1 00:10:27.830 verify_backlog=512 00:10:27.830 verify_state_save=0 00:10:27.830 do_verify=1 00:10:27.830 verify=crc32c-intel 00:10:27.830 [job0] 00:10:27.830 filename=/dev/nvme0n1 00:10:27.830 [job1] 00:10:27.830 filename=/dev/nvme0n2 00:10:27.830 [job2] 00:10:27.830 filename=/dev/nvme0n3 00:10:27.830 [job3] 00:10:27.830 filename=/dev/nvme0n4 00:10:27.830 Could not set queue depth (nvme0n1) 00:10:27.830 Could not set queue depth (nvme0n2) 00:10:27.830 Could not set queue depth (nvme0n3) 00:10:27.830 Could not set queue depth (nvme0n4) 00:10:28.091 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:28.091 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:28.091 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:28.091 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:28.091 fio-3.35 00:10:28.091 Starting 4 threads 00:10:29.475 
00:10:29.475 job0: (groupid=0, jobs=1): err= 0: pid=933412: Fri Nov 15 11:33:54 2024 00:10:29.475 read: IOPS=18, BW=74.3KiB/s (76.1kB/s)(76.0KiB/1023msec) 00:10:29.475 slat (nsec): min=26683, max=27409, avg=26982.42, stdev=168.37 00:10:29.475 clat (usec): min=40742, max=41033, avg=40951.43, stdev=66.43 00:10:29.475 lat (usec): min=40769, max=41060, avg=40978.42, stdev=66.42 00:10:29.475 clat percentiles (usec): 00:10:29.475 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:29.475 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:29.475 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:29.475 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:29.475 | 99.99th=[41157] 00:10:29.475 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:10:29.475 slat (nsec): min=9822, max=53857, avg=30726.58, stdev=9047.49 00:10:29.475 clat (usec): min=150, max=806, avg=436.64, stdev=109.91 00:10:29.475 lat (usec): min=161, max=840, avg=467.37, stdev=113.01 00:10:29.475 clat percentiles (usec): 00:10:29.475 | 1.00th=[ 208], 5.00th=[ 277], 10.00th=[ 302], 20.00th=[ 343], 00:10:29.475 | 30.00th=[ 371], 40.00th=[ 396], 50.00th=[ 424], 60.00th=[ 457], 00:10:29.475 | 70.00th=[ 502], 80.00th=[ 529], 90.00th=[ 578], 95.00th=[ 635], 00:10:29.475 | 99.00th=[ 701], 99.50th=[ 734], 99.90th=[ 807], 99.95th=[ 807], 00:10:29.475 | 99.99th=[ 807] 00:10:29.475 bw ( KiB/s): min= 4096, max= 4096, per=41.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:29.475 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:29.475 lat (usec) : 250=2.82%, 500=64.60%, 750=28.63%, 1000=0.38% 00:10:29.475 lat (msec) : 50=3.58% 00:10:29.475 cpu : usr=0.98%, sys=1.27%, ctx=533, majf=0, minf=1 00:10:29.475 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:29.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.475 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.475 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:29.475 job1: (groupid=0, jobs=1): err= 0: pid=933414: Fri Nov 15 11:33:54 2024 00:10:29.475 read: IOPS=18, BW=74.1KiB/s (75.9kB/s)(76.0KiB/1025msec) 00:10:29.475 slat (nsec): min=24417, max=27324, avg=26734.42, stdev=607.35 00:10:29.475 clat (usec): min=40799, max=41667, avg=41058.03, stdev=238.12 00:10:29.475 lat (usec): min=40826, max=41694, avg=41084.76, stdev=238.02 00:10:29.475 clat percentiles (usec): 00:10:29.475 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:10:29.475 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:29.475 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:10:29.475 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:10:29.475 | 99.99th=[41681] 00:10:29.475 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:10:29.475 slat (nsec): min=9457, max=50627, avg=28472.78, stdev=10441.04 00:10:29.475 clat (usec): min=109, max=919, avg=438.60, stdev=150.25 00:10:29.475 lat (usec): min=119, max=954, avg=467.07, stdev=152.87 00:10:29.475 clat percentiles (usec): 00:10:29.475 | 1.00th=[ 114], 5.00th=[ 133], 10.00th=[ 247], 20.00th=[ 330], 00:10:29.475 | 30.00th=[ 363], 40.00th=[ 392], 50.00th=[ 420], 60.00th=[ 469], 00:10:29.475 | 70.00th=[ 537], 80.00th=[ 578], 90.00th=[ 635], 95.00th=[ 
668], 00:10:29.475 | 99.00th=[ 783], 99.50th=[ 848], 99.90th=[ 922], 99.95th=[ 922], 00:10:29.475 | 99.99th=[ 922] 00:10:29.475 bw ( KiB/s): min= 4096, max= 4096, per=41.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:29.475 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:29.475 lat (usec) : 250=9.79%, 500=53.11%, 750=32.02%, 1000=1.51% 00:10:29.475 lat (msec) : 50=3.58% 00:10:29.475 cpu : usr=0.29%, sys=1.95%, ctx=534, majf=0, minf=1 00:10:29.475 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:29.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.475 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.475 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:29.475 job2: (groupid=0, jobs=1): err= 0: pid=933432: Fri Nov 15 11:33:54 2024 00:10:29.475 read: IOPS=708, BW=2833KiB/s (2901kB/s)(2836KiB/1001msec) 00:10:29.475 slat (nsec): min=6835, max=63523, avg=25492.10, stdev=8754.43 00:10:29.475 clat (usec): min=324, max=911, avg=649.94, stdev=98.85 00:10:29.475 lat (usec): min=332, max=939, avg=675.44, stdev=101.19 00:10:29.475 clat percentiles (usec): 00:10:29.475 | 1.00th=[ 404], 5.00th=[ 494], 10.00th=[ 515], 20.00th=[ 578], 00:10:29.475 | 30.00th=[ 603], 40.00th=[ 619], 50.00th=[ 652], 60.00th=[ 685], 00:10:29.475 | 70.00th=[ 709], 80.00th=[ 734], 90.00th=[ 775], 95.00th=[ 799], 00:10:29.475 | 99.00th=[ 848], 99.50th=[ 873], 99.90th=[ 914], 99.95th=[ 914], 00:10:29.475 | 99.99th=[ 914] 00:10:29.475 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:10:29.475 slat (nsec): min=9483, max=65719, avg=32875.09, stdev=8793.55 00:10:29.475 clat (usec): min=128, max=820, avg=461.62, stdev=116.32 00:10:29.475 lat (usec): min=138, max=870, avg=494.49, stdev=119.86 00:10:29.475 clat percentiles (usec): 00:10:29.475 | 1.00th=[ 178], 5.00th=[ 269], 10.00th=[ 322], 20.00th=[ 363], 00:10:29.475 | 30.00th=[ 404], 40.00th=[ 437], 50.00th=[ 457], 60.00th=[ 486], 00:10:29.475 | 70.00th=[ 523], 80.00th=[ 553], 90.00th=[ 611], 95.00th=[ 660], 00:10:29.475 | 99.00th=[ 750], 99.50th=[ 758], 99.90th=[ 816], 99.95th=[ 824], 00:10:29.475 | 99.99th=[ 824] 00:10:29.475 bw ( KiB/s): min= 4096, max= 4096, per=41.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:29.475 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:29.475 lat (usec) : 250=2.25%, 500=38.03%, 750=52.51%, 1000=7.21% 00:10:29.475 cpu : usr=3.50%, sys=6.90%, ctx=1735, majf=0, minf=1 00:10:29.475 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:29.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.475 issued rwts: total=709,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.475 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:29.475 job3: (groupid=0, jobs=1): err= 0: pid=933439: Fri Nov 15 11:33:54 2024 00:10:29.475 read: IOPS=18, BW=74.9KiB/s (76.7kB/s)(76.0KiB/1015msec) 00:10:29.475 slat (nsec): min=10779, max=28373, avg=27005.95, stdev=3932.34 00:10:29.475 clat (usec): min=40915, max=41018, avg=40962.00, stdev=29.43 00:10:29.475 lat (usec): min=40942, max=41033, avg=40989.01, stdev=27.80 00:10:29.475 clat percentiles (usec): 00:10:29.475 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:29.475 | 30.00th=[41157], 
40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:29.475 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:29.475 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:29.475 | 99.99th=[41157] 00:10:29.475 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:10:29.475 slat (nsec): min=9986, max=52478, avg=27911.75, stdev=11121.39 00:10:29.475 clat (usec): min=209, max=615, avg=422.68, stdev=85.68 00:10:29.475 lat (usec): min=243, max=649, avg=450.59, stdev=90.71 00:10:29.475 clat percentiles (usec): 00:10:29.475 | 1.00th=[ 237], 5.00th=[ 277], 10.00th=[ 297], 20.00th=[ 343], 00:10:29.475 | 30.00th=[ 367], 40.00th=[ 408], 50.00th=[ 441], 60.00th=[ 461], 00:10:29.475 | 70.00th=[ 482], 80.00th=[ 494], 90.00th=[ 519], 95.00th=[ 553], 00:10:29.475 | 99.00th=[ 594], 99.50th=[ 611], 99.90th=[ 619], 99.95th=[ 619], 00:10:29.475 | 99.99th=[ 619] 00:10:29.475 bw ( KiB/s): min= 4096, max= 4096, per=41.00%, avg=4096.00, stdev= 0.00, samples=1 00:10:29.475 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:29.475 lat (usec) : 250=2.26%, 500=77.40%, 750=16.76% 00:10:29.475 lat (msec) : 50=3.58% 00:10:29.475 cpu : usr=0.69%, sys=1.38%, ctx=532, majf=0, minf=1 00:10:29.475 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:29.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.475 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.475 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:29.475 00:10:29.475 Run status group 0 (all jobs): 00:10:29.475 READ: bw=2989KiB/s (3061kB/s), 74.1KiB/s-2833KiB/s (75.9kB/s-2901kB/s), io=3064KiB (3138kB), run=1001-1025msec 00:10:29.475 WRITE: bw=9990KiB/s (10.2MB/s), 1998KiB/s-4092KiB/s (2046kB/s-4190kB/s), io=10.0MiB (10.5MB), run=1001-1025msec 00:10:29.475 00:10:29.475 Disk stats (read/write): 00:10:29.475 nvme0n1: ios=50/512, merge=0/0, ticks=909/218, in_queue=1127, util=98.80% 00:10:29.476 nvme0n2: ios=50/512, merge=0/0, ticks=1566/219, in_queue=1785, util=100.00% 00:10:29.476 nvme0n3: ios=571/993, merge=0/0, ticks=452/379, in_queue=831, util=100.00% 00:10:29.476 nvme0n4: ios=53/512, merge=0/0, ticks=1087/211, in_queue=1298, util=98.08% 00:10:29.476 11:33:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:29.476 [global] 00:10:29.476 thread=1 00:10:29.476 invalidate=1 00:10:29.476 rw=write 00:10:29.476 time_based=1 00:10:29.476 runtime=1 00:10:29.476 ioengine=libaio 00:10:29.476 direct=1 00:10:29.476 bs=4096 00:10:29.476 iodepth=128 00:10:29.476 norandommap=0 00:10:29.476 numjobs=1 00:10:29.476 00:10:29.476 verify_dump=1 00:10:29.476 verify_backlog=512 00:10:29.476 verify_state_save=0 00:10:29.476 do_verify=1 00:10:29.476 verify=crc32c-intel 00:10:29.476 [job0] 00:10:29.476 filename=/dev/nvme0n1 00:10:29.476 [job1] 00:10:29.476 filename=/dev/nvme0n2 00:10:29.476 [job2] 00:10:29.476 filename=/dev/nvme0n3 00:10:29.476 [job3] 00:10:29.476 filename=/dev/nvme0n4 00:10:29.476 Could not set queue depth (nvme0n1) 00:10:29.476 Could not set queue depth (nvme0n2) 00:10:29.476 Could not set queue depth (nvme0n3) 00:10:29.476 Could not set queue depth (nvme0n4) 00:10:29.736 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:10:29.736 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:29.736 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:29.736 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:29.736 fio-3.35 00:10:29.736 Starting 4 threads 00:10:31.122 00:10:31.122 job0: (groupid=0, jobs=1): err= 0: pid=933937: Fri Nov 15 11:33:56 2024 00:10:31.122 read: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec) 00:10:31.122 slat (nsec): min=890, max=10704k, avg=95545.71, stdev=649846.25 00:10:31.122 clat (usec): min=1784, max=48326, avg=12298.17, stdev=6904.97 00:10:31.122 lat (usec): min=1822, max=48328, avg=12393.72, stdev=6956.04 00:10:31.122 clat percentiles (usec): 00:10:31.123 | 1.00th=[ 3032], 5.00th=[ 5080], 10.00th=[ 6915], 20.00th=[ 7898], 00:10:31.123 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[ 9634], 60.00th=[11731], 00:10:31.123 | 70.00th=[13960], 80.00th=[15664], 90.00th=[21103], 95.00th=[25035], 00:10:31.123 | 99.00th=[43254], 99.50th=[45351], 99.90th=[45351], 99.95th=[45351], 00:10:31.123 | 99.99th=[48497] 00:10:31.123 write: IOPS=4526, BW=17.7MiB/s (18.5MB/s)(17.8MiB/1007msec); 0 zone resets 00:10:31.123 slat (nsec): min=1547, max=14058k, avg=113937.86, stdev=778620.53 00:10:31.123 clat (usec): min=537, max=55310, avg=16989.16, stdev=12488.26 00:10:31.123 lat (usec): min=549, max=55319, avg=17103.10, stdev=12555.11 00:10:31.123 clat percentiles (usec): 00:10:31.123 | 1.00th=[ 1926], 5.00th=[ 4490], 10.00th=[ 5538], 20.00th=[ 8029], 00:10:31.123 | 30.00th=[ 9241], 40.00th=[10552], 50.00th=[12125], 60.00th=[14615], 00:10:31.123 | 70.00th=[17695], 80.00th=[26608], 90.00th=[37487], 95.00th=[44827], 00:10:31.123 | 99.00th=[54789], 99.50th=[55313], 99.90th=[55313], 99.95th=[55313], 00:10:31.123 | 99.99th=[55313] 00:10:31.123 bw ( KiB/s): min=16384, max=19064, per=18.75%, avg=17724.00, stdev=1895.05, samples=2 00:10:31.123 iops : min= 4096, max= 4766, avg=4431.00, stdev=473.76, samples=2 00:10:31.123 lat (usec) : 750=0.05%, 1000=0.07% 00:10:31.123 lat (msec) : 2=0.64%, 4=2.63%, 10=42.33%, 20=34.31%, 50=18.64% 00:10:31.123 lat (msec) : 100=1.34% 00:10:31.123 cpu : usr=2.49%, sys=5.37%, ctx=395, majf=0, minf=1 00:10:31.123 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:31.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:31.123 issued rwts: total=4096,4558,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.123 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:31.123 job1: (groupid=0, jobs=1): err= 0: pid=933938: Fri Nov 15 11:33:56 2024 00:10:31.123 read: IOPS=6252, BW=24.4MiB/s (25.6MB/s)(24.5MiB/1003msec) 00:10:31.123 slat (nsec): min=891, max=14603k, avg=76764.13, stdev=644896.98 00:10:31.123 clat (usec): min=2068, max=41835, avg=10330.07, stdev=5387.37 00:10:31.123 lat (usec): min=2075, max=41863, avg=10406.84, stdev=5440.64 00:10:31.123 clat percentiles (usec): 00:10:31.123 | 1.00th=[ 3916], 5.00th=[ 4752], 10.00th=[ 5407], 20.00th=[ 6325], 00:10:31.123 | 30.00th=[ 7242], 40.00th=[ 8029], 50.00th=[ 8586], 60.00th=[ 8979], 00:10:31.123 | 70.00th=[11207], 80.00th=[14222], 90.00th=[17433], 95.00th=[20317], 00:10:31.123 | 99.00th=[29230], 99.50th=[29230], 99.90th=[34341], 99.95th=[34341], 00:10:31.123 | 99.99th=[41681] 00:10:31.123 write: 
IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec); 0 zone resets 00:10:31.123 slat (nsec): min=1563, max=6835.3k, avg=57502.32, stdev=391514.60 00:10:31.123 clat (usec): min=657, max=43315, avg=9397.16, stdev=7340.07 00:10:31.123 lat (usec): min=683, max=43319, avg=9454.67, stdev=7378.66 00:10:31.123 clat percentiles (usec): 00:10:31.123 | 1.00th=[ 1336], 5.00th=[ 2966], 10.00th=[ 3720], 20.00th=[ 4948], 00:10:31.123 | 30.00th=[ 5473], 40.00th=[ 5866], 50.00th=[ 6783], 60.00th=[ 7570], 00:10:31.123 | 70.00th=[10290], 80.00th=[13304], 90.00th=[17695], 95.00th=[25297], 00:10:31.123 | 99.00th=[38536], 99.50th=[41157], 99.90th=[43254], 99.95th=[43254], 00:10:31.123 | 99.99th=[43254] 00:10:31.123 bw ( KiB/s): min=24464, max=28784, per=28.17%, avg=26624.00, stdev=3054.70, samples=2 00:10:31.123 iops : min= 6116, max= 7196, avg=6656.00, stdev=763.68, samples=2 00:10:31.123 lat (usec) : 750=0.04%, 1000=0.04% 00:10:31.123 lat (msec) : 2=1.18%, 4=5.65%, 10=60.53%, 20=25.74%, 50=6.82% 00:10:31.123 cpu : usr=5.49%, sys=6.69%, ctx=424, majf=0, minf=1 00:10:31.123 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:31.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:31.123 issued rwts: total=6271,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.123 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:31.123 job2: (groupid=0, jobs=1): err= 0: pid=933945: Fri Nov 15 11:33:56 2024 00:10:31.123 read: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec) 00:10:31.123 slat (nsec): min=928, max=15016k, avg=85364.74, stdev=618577.61 00:10:31.123 clat (usec): min=3842, max=45421, avg=10818.55, stdev=4965.15 00:10:31.123 lat (usec): min=3848, max=45448, avg=10903.92, stdev=5024.05 00:10:31.123 clat percentiles (usec): 00:10:31.123 | 1.00th=[ 5276], 5.00th=[ 6980], 10.00th=[ 7504], 20.00th=[ 8356], 00:10:31.123 | 30.00th=[ 8717], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9503], 00:10:31.123 | 70.00th=[10552], 80.00th=[12125], 90.00th=[15401], 95.00th=[25035], 00:10:31.123 | 99.00th=[32113], 99.50th=[32637], 99.90th=[32637], 99.95th=[32637], 00:10:31.123 | 99.99th=[45351] 00:10:31.123 write: IOPS=6207, BW=24.2MiB/s (25.4MB/s)(24.4MiB/1005msec); 0 zone resets 00:10:31.123 slat (nsec): min=1599, max=13371k, avg=68303.82, stdev=407271.31 00:10:31.123 clat (usec): min=1018, max=46034, avg=9775.53, stdev=4865.68 00:10:31.123 lat (usec): min=1026, max=46056, avg=9843.84, stdev=4898.93 00:10:31.123 clat percentiles (usec): 00:10:31.123 | 1.00th=[ 2040], 5.00th=[ 4686], 10.00th=[ 5800], 20.00th=[ 7832], 00:10:31.123 | 30.00th=[ 8160], 40.00th=[ 8291], 50.00th=[ 8455], 60.00th=[ 8586], 00:10:31.123 | 70.00th=[ 8848], 80.00th=[10421], 90.00th=[15795], 95.00th=[20579], 00:10:31.123 | 99.00th=[32637], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:10:31.123 | 99.99th=[45876] 00:10:31.123 bw ( KiB/s): min=23792, max=25360, per=26.00%, avg=24576.00, stdev=1108.74, samples=2 00:10:31.123 iops : min= 5948, max= 6340, avg=6144.00, stdev=277.19, samples=2 00:10:31.123 lat (msec) : 2=0.48%, 4=1.11%, 10=70.44%, 20=21.72%, 50=6.26% 00:10:31.123 cpu : usr=4.98%, sys=5.48%, ctx=612, majf=0, minf=2 00:10:31.123 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:31.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:31.123 issued 
rwts: total=6144,6239,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.123 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:31.123 job3: (groupid=0, jobs=1): err= 0: pid=933948: Fri Nov 15 11:33:56 2024 00:10:31.123 read: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec) 00:10:31.123 slat (nsec): min=919, max=7059.0k, avg=72005.35, stdev=483573.67 00:10:31.123 clat (usec): min=3725, max=22258, avg=9643.23, stdev=2143.05 00:10:31.123 lat (usec): min=3732, max=22263, avg=9715.23, stdev=2180.32 00:10:31.123 clat percentiles (usec): 00:10:31.123 | 1.00th=[ 6128], 5.00th=[ 6915], 10.00th=[ 7832], 20.00th=[ 8455], 00:10:31.123 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9110], 60.00th=[ 9241], 00:10:31.123 | 70.00th=[10028], 80.00th=[10552], 90.00th=[12256], 95.00th=[13698], 00:10:31.123 | 99.00th=[19006], 99.50th=[20317], 99.90th=[22152], 99.95th=[22152], 00:10:31.123 | 99.99th=[22152] 00:10:31.123 write: IOPS=6311, BW=24.7MiB/s (25.9MB/s)(24.8MiB/1005msec); 0 zone resets 00:10:31.123 slat (nsec): min=1565, max=12394k, avg=80439.69, stdev=483277.56 00:10:31.123 clat (usec): min=612, max=47256, avg=10755.90, stdev=7207.39 00:10:31.123 lat (usec): min=973, max=47264, avg=10836.34, stdev=7247.73 00:10:31.123 clat percentiles (usec): 00:10:31.123 | 1.00th=[ 3326], 5.00th=[ 5014], 10.00th=[ 6194], 20.00th=[ 7635], 00:10:31.123 | 30.00th=[ 8094], 40.00th=[ 8291], 50.00th=[ 8455], 60.00th=[ 8848], 00:10:31.123 | 70.00th=[ 9896], 80.00th=[12125], 90.00th=[15270], 95.00th=[24511], 00:10:31.123 | 99.00th=[45351], 99.50th=[45351], 99.90th=[47449], 99.95th=[47449], 00:10:31.123 | 99.99th=[47449] 00:10:31.123 bw ( KiB/s): min=21488, max=28240, per=26.30%, avg=24864.00, stdev=4774.38, samples=2 00:10:31.123 iops : min= 5372, max= 7060, avg=6216.00, stdev=1193.60, samples=2 00:10:31.123 lat (usec) : 750=0.01%, 1000=0.07% 00:10:31.123 lat (msec) : 2=0.09%, 4=0.72%, 10=69.56%, 20=25.14%, 50=4.41% 00:10:31.123 cpu : usr=4.08%, sys=6.47%, ctx=629, majf=0, minf=1 00:10:31.123 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:31.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.123 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:31.123 issued rwts: total=6144,6343,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.123 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:31.123 00:10:31.123 Run status group 0 (all jobs): 00:10:31.123 READ: bw=87.9MiB/s (92.1MB/s), 15.9MiB/s-24.4MiB/s (16.7MB/s-25.6MB/s), io=88.5MiB (92.8MB), run=1003-1007msec 00:10:31.123 WRITE: bw=92.3MiB/s (96.8MB/s), 17.7MiB/s-25.9MiB/s (18.5MB/s-27.2MB/s), io=93.0MiB (97.5MB), run=1003-1007msec 00:10:31.123 00:10:31.123 Disk stats (read/write): 00:10:31.123 nvme0n1: ios=3322/3584, merge=0/0, ticks=28125/44129, in_queue=72254, util=87.88% 00:10:31.123 nvme0n2: ios=5422/5632, merge=0/0, ticks=44017/48726, in_queue=92743, util=91.85% 00:10:31.123 nvme0n3: ios=5045/5120, merge=0/0, ticks=27753/27156, in_queue=54909, util=91.35% 00:10:31.123 nvme0n4: ios=5007/5120, merge=0/0, ticks=36278/44462, in_queue=80740, util=96.58% 00:10:31.123 11:33:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:31.123 [global] 00:10:31.123 thread=1 00:10:31.123 invalidate=1 00:10:31.123 rw=randwrite 00:10:31.123 time_based=1 00:10:31.123 runtime=1 00:10:31.123 ioengine=libaio 00:10:31.123 direct=1 00:10:31.123 bs=4096 
00:10:31.123 iodepth=128 00:10:31.123 norandommap=0 00:10:31.123 numjobs=1 00:10:31.123 00:10:31.123 verify_dump=1 00:10:31.123 verify_backlog=512 00:10:31.123 verify_state_save=0 00:10:31.123 do_verify=1 00:10:31.123 verify=crc32c-intel 00:10:31.123 [job0] 00:10:31.123 filename=/dev/nvme0n1 00:10:31.123 [job1] 00:10:31.123 filename=/dev/nvme0n2 00:10:31.123 [job2] 00:10:31.123 filename=/dev/nvme0n3 00:10:31.123 [job3] 00:10:31.123 filename=/dev/nvme0n4 00:10:31.123 Could not set queue depth (nvme0n1) 00:10:31.123 Could not set queue depth (nvme0n2) 00:10:31.123 Could not set queue depth (nvme0n3) 00:10:31.123 Could not set queue depth (nvme0n4) 00:10:31.384 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:31.384 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:31.384 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:31.384 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:31.384 fio-3.35 00:10:31.384 Starting 4 threads 00:10:32.769 00:10:32.769 job0: (groupid=0, jobs=1): err= 0: pid=934441: Fri Nov 15 11:33:58 2024 00:10:32.769 read: IOPS=5058, BW=19.8MiB/s (20.7MB/s)(20.7MiB/1048msec) 00:10:32.769 slat (nsec): min=905, max=28363k, avg=85532.83, stdev=765798.81 00:10:32.769 clat (msec): min=2, max=128, avg=12.37, stdev=13.24 00:10:32.769 lat (msec): min=2, max=128, avg=12.46, stdev=13.31 00:10:32.769 clat percentiles (msec): 00:10:32.769 | 1.00th=[ 5], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 8], 00:10:32.769 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 11], 00:10:32.769 | 70.00th=[ 12], 80.00th=[ 14], 90.00th=[ 17], 95.00th=[ 21], 00:10:32.769 | 99.00th=[ 93], 99.50th=[ 110], 99.90th=[ 128], 99.95th=[ 129], 00:10:32.769 | 99.99th=[ 129] 00:10:32.769 write: IOPS=5374, BW=21.0MiB/s (22.0MB/s)(22.0MiB/1048msec); 0 zone resets 00:10:32.769 slat (nsec): min=1515, max=11351k, avg=89449.76, stdev=595903.97 00:10:32.769 clat (usec): min=1109, max=79526, avg=11737.32, stdev=11118.91 00:10:32.769 lat (usec): min=1117, max=79535, avg=11826.77, stdev=11182.25 00:10:32.769 clat percentiles (usec): 00:10:32.769 | 1.00th=[ 2933], 5.00th=[ 4424], 10.00th=[ 4817], 20.00th=[ 6325], 00:10:32.769 | 30.00th=[ 6849], 40.00th=[ 7308], 50.00th=[ 7898], 60.00th=[ 8586], 00:10:32.769 | 70.00th=[11469], 80.00th=[14484], 90.00th=[21365], 95.00th=[36439], 00:10:32.769 | 99.00th=[64226], 99.50th=[67634], 99.90th=[79168], 99.95th=[79168], 00:10:32.769 | 99.99th=[79168] 00:10:32.769 bw ( KiB/s): min=16384, max=28672, per=23.53%, avg=22528.00, stdev=8688.93, samples=2 00:10:32.769 iops : min= 4096, max= 7168, avg=5632.00, stdev=2172.23, samples=2 00:10:32.769 lat (msec) : 2=0.17%, 4=1.83%, 10=59.12%, 20=30.22%, 50=6.14% 00:10:32.769 lat (msec) : 100=2.24%, 250=0.27% 00:10:32.769 cpu : usr=3.44%, sys=6.21%, ctx=427, majf=0, minf=1 00:10:32.769 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:32.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:32.769 issued rwts: total=5301,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.769 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:32.769 job1: (groupid=0, jobs=1): err= 0: pid=934458: Fri Nov 15 11:33:58 2024 00:10:32.769 read: IOPS=6700, BW=26.2MiB/s 
(27.4MB/s)(26.3MiB/1004msec) 00:10:32.769 slat (nsec): min=884, max=2830.8k, avg=74471.47, stdev=340446.29 00:10:32.769 clat (usec): min=1014, max=11431, avg=9421.86, stdev=957.64 00:10:32.769 lat (usec): min=3562, max=11903, avg=9496.33, stdev=910.84 00:10:32.769 clat percentiles (usec): 00:10:32.769 | 1.00th=[ 6783], 5.00th=[ 8160], 10.00th=[ 8455], 20.00th=[ 8717], 00:10:32.769 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9765], 00:10:32.769 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10552], 95.00th=[10683], 00:10:32.769 | 99.00th=[10814], 99.50th=[11207], 99.90th=[11469], 99.95th=[11469], 00:10:32.769 | 99.99th=[11469] 00:10:32.769 write: IOPS=7139, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1004msec); 0 zone resets 00:10:32.769 slat (nsec): min=1480, max=2701.7k, avg=67694.26, stdev=343368.88 00:10:32.769 clat (usec): min=5934, max=11594, avg=8870.89, stdev=925.59 00:10:32.769 lat (usec): min=5937, max=11595, avg=8938.58, stdev=869.44 00:10:32.769 clat percentiles (usec): 00:10:32.769 | 1.00th=[ 6587], 5.00th=[ 7111], 10.00th=[ 7308], 20.00th=[ 8455], 00:10:32.769 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9110], 00:10:32.769 | 70.00th=[ 9241], 80.00th=[ 9634], 90.00th=[10028], 95.00th=[10290], 00:10:32.769 | 99.00th=[10945], 99.50th=[11338], 99.90th=[11600], 99.95th=[11600], 00:10:32.769 | 99.99th=[11600] 00:10:32.769 bw ( KiB/s): min=28216, max=28672, per=29.70%, avg=28444.00, stdev=322.44, samples=2 00:10:32.769 iops : min= 7054, max= 7168, avg=7111.00, stdev=80.61, samples=2 00:10:32.769 lat (msec) : 2=0.01%, 4=0.23%, 10=78.34%, 20=21.42% 00:10:32.769 cpu : usr=2.29%, sys=3.59%, ctx=733, majf=0, minf=2 00:10:32.769 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:32.769 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.769 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:32.769 issued rwts: total=6727,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.769 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:32.769 job2: (groupid=0, jobs=1): err= 0: pid=934470: Fri Nov 15 11:33:58 2024 00:10:32.769 read: IOPS=6688, BW=26.1MiB/s (27.4MB/s)(26.3MiB/1007msec) 00:10:32.769 slat (nsec): min=975, max=8850.8k, avg=76641.01, stdev=578836.92 00:10:32.769 clat (usec): min=3261, max=21069, avg=9891.83, stdev=2410.55 00:10:32.769 lat (usec): min=3270, max=21071, avg=9968.47, stdev=2447.70 00:10:32.769 clat percentiles (usec): 00:10:32.769 | 1.00th=[ 4228], 5.00th=[ 6783], 10.00th=[ 7504], 20.00th=[ 8094], 00:10:32.769 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9765], 60.00th=[10159], 00:10:32.769 | 70.00th=[10683], 80.00th=[11338], 90.00th=[13173], 95.00th=[14746], 00:10:32.769 | 99.00th=[17433], 99.50th=[18482], 99.90th=[20317], 99.95th=[21103], 00:10:32.769 | 99.99th=[21103] 00:10:32.769 write: IOPS=7118, BW=27.8MiB/s (29.2MB/s)(28.0MiB/1007msec); 0 zone resets 00:10:32.769 slat (nsec): min=1673, max=8680.9k, avg=62306.39, stdev=414530.49 00:10:32.770 clat (usec): min=2002, max=21069, avg=8487.03, stdev=2386.53 00:10:32.770 lat (usec): min=2307, max=21071, avg=8549.33, stdev=2410.42 00:10:32.770 clat percentiles (usec): 00:10:32.770 | 1.00th=[ 3294], 5.00th=[ 4555], 10.00th=[ 4883], 20.00th=[ 6390], 00:10:32.770 | 30.00th=[ 7767], 40.00th=[ 8291], 50.00th=[ 8717], 60.00th=[ 9110], 00:10:32.770 | 70.00th=[ 9503], 80.00th=[10028], 90.00th=[10945], 95.00th=[12256], 00:10:32.770 | 99.00th=[16057], 99.50th=[16712], 99.90th=[17957], 99.95th=[18220], 00:10:32.770 | 
99.99th=[21103] 00:10:32.770 bw ( KiB/s): min=28080, max=28872, per=29.74%, avg=28476.00, stdev=560.03, samples=2 00:10:32.770 iops : min= 7020, max= 7218, avg=7119.00, stdev=140.01, samples=2 00:10:32.770 lat (msec) : 4=1.69%, 10=68.42%, 20=29.78%, 50=0.11% 00:10:32.770 cpu : usr=4.17%, sys=7.95%, ctx=604, majf=0, minf=1 00:10:32.770 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:32.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:32.770 issued rwts: total=6735,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.770 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:32.770 job3: (groupid=0, jobs=1): err= 0: pid=934474: Fri Nov 15 11:33:58 2024 00:10:32.770 read: IOPS=4828, BW=18.9MiB/s (19.8MB/s)(18.9MiB/1003msec) 00:10:32.770 slat (nsec): min=975, max=16729k, avg=95188.82, stdev=745901.84 00:10:32.770 clat (usec): min=1477, max=90629, avg=11916.42, stdev=8546.40 00:10:32.770 lat (usec): min=1493, max=90636, avg=12011.61, stdev=8640.60 00:10:32.770 clat percentiles (usec): 00:10:32.770 | 1.00th=[ 3621], 5.00th=[ 5997], 10.00th=[ 7308], 20.00th=[ 8455], 00:10:32.770 | 30.00th=[ 8848], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[10159], 00:10:32.770 | 70.00th=[11600], 80.00th=[14353], 90.00th=[17957], 95.00th=[21890], 00:10:32.770 | 99.00th=[58983], 99.50th=[72877], 99.90th=[90702], 99.95th=[90702], 00:10:32.770 | 99.99th=[90702] 00:10:32.770 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:10:32.770 slat (nsec): min=1653, max=10537k, avg=90708.20, stdev=559754.81 00:10:32.770 clat (usec): min=841, max=90625, avg=13582.83, stdev=14574.08 00:10:32.770 lat (usec): min=853, max=90636, avg=13673.54, stdev=14666.83 00:10:32.770 clat percentiles (usec): 00:10:32.770 | 1.00th=[ 2114], 5.00th=[ 3884], 10.00th=[ 4883], 20.00th=[ 6063], 00:10:32.770 | 30.00th=[ 7504], 40.00th=[ 8094], 50.00th=[ 8586], 60.00th=[ 9372], 00:10:32.770 | 70.00th=[11469], 80.00th=[14615], 90.00th=[28181], 95.00th=[60031], 00:10:32.770 | 99.00th=[69731], 99.50th=[70779], 99.90th=[79168], 99.95th=[79168], 00:10:32.770 | 99.99th=[90702] 00:10:32.770 bw ( KiB/s): min=17072, max=23888, per=21.39%, avg=20480.00, stdev=4819.64, samples=2 00:10:32.770 iops : min= 4268, max= 5972, avg=5120.00, stdev=1204.91, samples=2 00:10:32.770 lat (usec) : 1000=0.03% 00:10:32.770 lat (msec) : 2=0.70%, 4=2.76%, 10=57.82%, 20=29.13%, 50=5.79% 00:10:32.770 lat (msec) : 100=3.76% 00:10:32.770 cpu : usr=3.49%, sys=5.99%, ctx=433, majf=0, minf=1 00:10:32.770 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:32.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:32.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:32.770 issued rwts: total=4843,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:32.770 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:32.770 00:10:32.770 Run status group 0 (all jobs): 00:10:32.770 READ: bw=88.0MiB/s (92.3MB/s), 18.9MiB/s-26.2MiB/s (19.8MB/s-27.4MB/s), io=92.2MiB (96.7MB), run=1003-1048msec 00:10:32.770 WRITE: bw=93.5MiB/s (98.1MB/s), 19.9MiB/s-27.9MiB/s (20.9MB/s-29.2MB/s), io=98.0MiB (103MB), run=1003-1048msec 00:10:32.770 00:10:32.770 Disk stats (read/write): 00:10:32.770 nvme0n1: ios=5350/5632, merge=0/0, ticks=52940/61585, in_queue=114525, util=85.73% 00:10:32.770 nvme0n2: ios=5170/5632, merge=0/0, ticks=12043/11598, 
in_queue=23641, util=85.89% 00:10:32.770 nvme0n3: ios=5143/5479, merge=0/0, ticks=48932/45480, in_queue=94412, util=97.78% 00:10:32.770 nvme0n4: ios=3629/4084, merge=0/0, ticks=40382/55952, in_queue=96334, util=95.78% 00:10:32.770 11:33:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:32.770 11:33:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=934784 00:10:32.770 11:33:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:32.770 11:33:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:32.770 [global] 00:10:32.770 thread=1 00:10:32.770 invalidate=1 00:10:32.770 rw=read 00:10:32.770 time_based=1 00:10:32.770 runtime=10 00:10:32.770 ioengine=libaio 00:10:32.770 direct=1 00:10:32.770 bs=4096 00:10:32.770 iodepth=1 00:10:32.770 norandommap=1 00:10:32.770 numjobs=1 00:10:32.770 00:10:32.770 [job0] 00:10:32.770 filename=/dev/nvme0n1 00:10:32.770 [job1] 00:10:32.770 filename=/dev/nvme0n2 00:10:32.770 [job2] 00:10:32.770 filename=/dev/nvme0n3 00:10:32.770 [job3] 00:10:32.770 filename=/dev/nvme0n4 00:10:33.054 Could not set queue depth (nvme0n1) 00:10:33.054 Could not set queue depth (nvme0n2) 00:10:33.054 Could not set queue depth (nvme0n3) 00:10:33.054 Could not set queue depth (nvme0n4) 00:10:33.335 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:33.335 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:33.335 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:33.335 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:33.335 fio-3.35 00:10:33.335 Starting 4 threads 00:10:35.872 11:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:35.872 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=9363456, buflen=4096 00:10:35.872 fio: pid=935046, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:36.132 11:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:36.132 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=274432, buflen=4096 00:10:36.132 fio: pid=935033, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:36.132 11:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:36.132 11:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:36.391 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=11124736, buflen=4096 00:10:36.391 fio: pid=934988, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:36.391 11:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:36.391 11:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:36.666 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=12046336, buflen=4096 00:10:36.666 fio: pid=934999, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:36.666 11:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:36.666 11:34:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:36.666 00:10:36.666 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=934988: Fri Nov 15 11:34:01 2024 00:10:36.666 read: IOPS=934, BW=3738KiB/s (3828kB/s)(10.6MiB/2906msec) 00:10:36.666 slat (usec): min=6, max=15953, avg=34.02, stdev=332.00 00:10:36.666 clat (usec): min=460, max=41295, avg=1021.21, stdev=780.20 00:10:36.666 lat (usec): min=485, max=41321, avg=1055.23, stdev=848.92 00:10:36.666 clat percentiles (usec): 00:10:36.666 | 1.00th=[ 676], 5.00th=[ 816], 10.00th=[ 865], 20.00th=[ 938], 00:10:36.666 | 30.00th=[ 979], 40.00th=[ 1004], 50.00th=[ 1020], 60.00th=[ 1045], 00:10:36.666 | 70.00th=[ 1057], 80.00th=[ 1074], 90.00th=[ 1106], 95.00th=[ 1139], 00:10:36.666 | 99.00th=[ 1205], 99.50th=[ 1434], 99.90th=[ 1631], 99.95th=[ 1729], 00:10:36.666 | 99.99th=[41157] 00:10:36.666 bw ( KiB/s): min= 3449, max= 4048, per=36.52%, avg=3784.20, stdev=216.57, samples=5 00:10:36.666 iops : min= 862, max= 1012, avg=946.00, stdev=54.24, samples=5 00:10:36.666 lat (usec) : 500=0.04%, 750=1.99%, 1000=36.95% 00:10:36.666 lat (msec) : 2=60.95%, 50=0.04% 00:10:36.666 cpu : usr=1.03%, sys=2.82%, ctx=2719, majf=0, minf=1 00:10:36.666 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:36.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.666 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.666 issued rwts: total=2717,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.666 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:36.666 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=934999: Fri Nov 15 11:34:01 2024 00:10:36.666 read: IOPS=951, BW=3805KiB/s (3896kB/s)(11.5MiB/3092msec) 00:10:36.666 slat (usec): min=6, max=24337, avg=53.36, stdev=628.39 00:10:36.666 clat (usec): min=449, max=34244, avg=983.45, stdev=621.32 00:10:36.666 lat (usec): min=476, max=34272, avg=1036.82, stdev=885.26 00:10:36.666 clat percentiles (usec): 00:10:36.666 | 1.00th=[ 668], 5.00th=[ 791], 10.00th=[ 848], 20.00th=[ 914], 00:10:36.666 | 30.00th=[ 947], 40.00th=[ 971], 50.00th=[ 988], 60.00th=[ 1004], 00:10:36.666 | 70.00th=[ 1020], 80.00th=[ 1037], 90.00th=[ 1074], 95.00th=[ 1090], 00:10:36.666 | 99.00th=[ 1172], 99.50th=[ 1205], 99.90th=[ 1745], 99.95th=[ 1860], 00:10:36.666 | 99.99th=[34341] 00:10:36.666 bw ( KiB/s): min= 3449, max= 4024, per=37.03%, avg=3837.50, stdev=232.50, samples=6 00:10:36.666 iops : min= 862, max= 1006, avg=959.33, stdev=58.21, samples=6 00:10:36.666 lat (usec) : 500=0.17%, 750=2.65%, 1000=55.85% 00:10:36.666 lat (msec) : 2=41.26%, 50=0.03% 00:10:36.666 cpu : usr=1.71%, sys=3.91%, ctx=2951, majf=0, minf=2 00:10:36.666 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:36.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.666 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.666 issued rwts: total=2942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.666 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:36.666 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=935033: Fri Nov 15 11:34:01 2024 00:10:36.666 read: IOPS=24, BW=98.2KiB/s (101kB/s)(268KiB/2730msec) 00:10:36.666 slat (usec): min=5, max=131, avg=26.53, stdev=14.16 00:10:36.666 clat (usec): min=885, max=45628, avg=40396.16, stdev=6882.54 00:10:36.666 lat (usec): min=929, max=45654, avg=40422.95, stdev=6882.20 00:10:36.666 clat percentiles (usec): 00:10:36.666 | 1.00th=[ 889], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:36.666 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:10:36.666 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:36.666 | 99.00th=[45876], 99.50th=[45876], 99.90th=[45876], 99.95th=[45876], 00:10:36.666 | 99.99th=[45876] 00:10:36.666 bw ( KiB/s): min= 96, max= 103, per=0.94%, avg=97.40, stdev= 3.13, samples=5 00:10:36.666 iops : min= 24, max= 25, avg=24.20, stdev= 0.45, samples=5 00:10:36.666 lat (usec) : 1000=1.47% 00:10:36.666 lat (msec) : 4=1.47%, 50=95.59% 00:10:36.666 cpu : usr=0.11%, sys=0.00%, ctx=69, majf=0, minf=1 00:10:36.666 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:36.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.666 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.666 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.666 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:36.666 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=935046: Fri Nov 15 11:34:01 2024 00:10:36.666 read: IOPS=898, BW=3594KiB/s (3681kB/s)(9144KiB/2544msec) 00:10:36.666 slat (nsec): min=7110, max=60684, avg=26666.11, stdev=2895.65 00:10:36.666 clat (usec): min=532, max=41938, avg=1069.60, stdev=1747.07 00:10:36.666 lat (usec): min=559, max=41965, avg=1096.26, stdev=1746.93 00:10:36.666 clat percentiles (usec): 00:10:36.666 | 1.00th=[ 676], 5.00th=[ 807], 10.00th=[ 848], 20.00th=[ 906], 00:10:36.666 | 30.00th=[ 947], 40.00th=[ 979], 50.00th=[ 1004], 60.00th=[ 1020], 00:10:36.666 | 70.00th=[ 1045], 80.00th=[ 1074], 90.00th=[ 1106], 95.00th=[ 1139], 00:10:36.666 | 99.00th=[ 1270], 99.50th=[ 1483], 99.90th=[41157], 99.95th=[41681], 00:10:36.666 | 99.99th=[41681] 00:10:36.666 bw ( KiB/s): min= 2920, max= 3968, per=35.00%, avg=3627.40, stdev=459.32, samples=5 00:10:36.666 iops : min= 730, max= 992, avg=906.80, stdev=114.86, samples=5 00:10:36.666 lat (usec) : 750=2.23%, 1000=47.57% 00:10:36.666 lat (msec) : 2=49.93%, 50=0.22% 00:10:36.666 cpu : usr=0.55%, sys=3.19%, ctx=2289, majf=0, minf=2 00:10:36.666 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:36.666 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.666 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.666 issued rwts: total=2287,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.666 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:36.667 00:10:36.667 Run status group 0 (all jobs): 00:10:36.667 READ: bw=10.1MiB/s (10.6MB/s), 98.2KiB/s-3805KiB/s (101kB/s-3896kB/s), io=31.3MiB (32.8MB), run=2544-3092msec 
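Every job in this run ends with err=95, Operation not supported: the rpc.py calls interleaved above delete the raid0, concat0 and malloc bdevs out from under the in-flight reads, and the harness counts those failures as the pass condition ("fio failed as expected" a few lines further down). For reference, a minimal standalone sketch of one of the four read jobs, with parameters copied from the job file printed above; invoking fio directly like this is an assumption, since the test drives it through scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10:

  # Sketch only: one job from the wrapper-generated file, run by hand.
  fio --name=job0 --filename=/dev/nvme0n1 \
      --rw=read --bs=4096 --iodepth=1 --numjobs=1 --thread \
      --ioengine=libaio --direct=1 --invalidate=1 \
      --time_based --runtime=10 --norandommap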
00:10:36.667 00:10:36.667 Disk stats (read/write): 00:10:36.667 nvme0n1: ios=2621/0, merge=0/0, ticks=2604/0, in_queue=2604, util=92.29% 00:10:36.667 nvme0n2: ios=2937/0, merge=0/0, ticks=2998/0, in_queue=2998, util=96.70% 00:10:36.667 nvme0n3: ios=61/0, merge=0/0, ticks=2499/0, in_queue=2499, util=95.59% 00:10:36.667 nvme0n4: ios=2322/0, merge=0/0, ticks=3192/0, in_queue=3192, util=99.92% 00:10:36.667 11:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:36.667 11:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:36.957 11:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:36.957 11:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:37.248 11:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:37.248 11:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:37.248 11:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:37.248 11:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:37.524 11:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:37.524 11:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 934784 00:10:37.524 11:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:37.524 11:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:37.524 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.524 11:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:37.524 11:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:10:37.524 11:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:37.524 11:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:37.524 11:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:37.524 11:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:37.524 11:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:10:37.524 11:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:37.524 11:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:37.524 nvmf hotplug test: fio failed as expected 00:10:37.524 11:34:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:37.786 11:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:37.786 11:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:37.786 11:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:37.786 11:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:37.786 11:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:37.786 11:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:37.786 11:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:37.786 11:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:37.786 11:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:37.786 11:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:37.786 11:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:37.786 rmmod nvme_tcp 00:10:37.786 rmmod nvme_fabrics 00:10:37.786 rmmod nvme_keyring 00:10:37.786 11:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:37.786 11:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:37.786 11:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:37.786 11:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 931202 ']' 00:10:37.786 11:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 931202 00:10:37.786 11:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 931202 ']' 00:10:37.786 11:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 931202 00:10:37.786 11:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:10:37.786 11:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:37.786 11:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 931202 00:10:38.046 11:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:38.046 11:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:38.046 11:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 931202' 00:10:38.046 killing process with pid 931202 00:10:38.046 11:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 931202 00:10:38.046 11:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 931202 00:10:38.046 11:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:38.046 11:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:38.047 11:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 
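The teardown traced next restores networking state before the namespace is removed. In particular, the iptr helper boils down to reloading the firewall ruleset minus every rule carrying the SPDK_NVMF comment tag that setup attached to its ACCEPT rule, as the traced iptables-save/grep/iptables-restore pipeline below shows; as a one-line sketch:

  # Reload the live ruleset without the SPDK-tagged rules.
  iptables-save | grep -v SPDK_NVMF | iptables-restore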
00:10:38.047 11:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:38.047 11:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:38.047 11:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:38.047 11:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:38.047 11:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:38.047 11:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:38.047 11:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.047 11:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:38.047 11:34:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.596 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:40.596 00:10:40.596 real 0m29.573s 00:10:40.596 user 2m45.017s 00:10:40.596 sys 0m9.802s 00:10:40.596 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:40.596 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.596 ************************************ 00:10:40.596 END TEST nvmf_fio_target 00:10:40.596 ************************************ 00:10:40.596 11:34:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:40.596 11:34:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:40.596 11:34:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:40.596 11:34:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:40.596 ************************************ 00:10:40.596 START TEST nvmf_bdevio 00:10:40.596 ************************************ 00:10:40.596 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:40.596 * Looking for test storage... 
00:10:40.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:40.596 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:40.596 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:10:40.596 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:40.596 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:40.596 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:40.596 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:40.596 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:40.596 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:40.596 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:40.596 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:40.596 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:40.596 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:40.596 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:40.596 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:40.596 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:40.596 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:40.596 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:40.596 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:40.596 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:40.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.597 --rc genhtml_branch_coverage=1 00:10:40.597 --rc genhtml_function_coverage=1 00:10:40.597 --rc genhtml_legend=1 00:10:40.597 --rc geninfo_all_blocks=1 00:10:40.597 --rc geninfo_unexecuted_blocks=1 00:10:40.597 00:10:40.597 ' 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:40.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.597 --rc genhtml_branch_coverage=1 00:10:40.597 --rc genhtml_function_coverage=1 00:10:40.597 --rc genhtml_legend=1 00:10:40.597 --rc geninfo_all_blocks=1 00:10:40.597 --rc geninfo_unexecuted_blocks=1 00:10:40.597 00:10:40.597 ' 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:40.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.597 --rc genhtml_branch_coverage=1 00:10:40.597 --rc genhtml_function_coverage=1 00:10:40.597 --rc genhtml_legend=1 00:10:40.597 --rc geninfo_all_blocks=1 00:10:40.597 --rc geninfo_unexecuted_blocks=1 00:10:40.597 00:10:40.597 ' 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:40.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:40.597 --rc genhtml_branch_coverage=1 00:10:40.597 --rc genhtml_function_coverage=1 00:10:40.597 --rc genhtml_legend=1 00:10:40.597 --rc geninfo_all_blocks=1 00:10:40.597 --rc geninfo_unexecuted_blocks=1 00:10:40.597 00:10:40.597 ' 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:40.597 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:40.597 11:34:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:48.745 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:48.745 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:48.745 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:48.745 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:48.745 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:48.745 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:48.745 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:48.745 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:48.745 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:48.745 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:48.745 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:48.745 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:48.745 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:48.745 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:48.745 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:48.745 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:48.745 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:48.745 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:48.745 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:48.745 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:48.745 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:48.746 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:48.746 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:48.746 11:34:12 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:48.746 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:48.746 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:48.746 
11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:48.746 11:34:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:48.746 11:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:48.746 11:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:48.746 11:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:48.746 11:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:48.746 11:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:48.746 11:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:48.746 11:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:48.746 11:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:48.746 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:48.746 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.541 ms 00:10:48.746 00:10:48.746 --- 10.0.0.2 ping statistics --- 00:10:48.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.746 rtt min/avg/max/mdev = 0.541/0.541/0.541/0.000 ms 00:10:48.746 11:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:48.746 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:48.746 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:10:48.746 00:10:48.746 --- 10.0.0.1 ping statistics --- 00:10:48.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:48.746 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:10:48.746 11:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:48.746 11:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:48.746 11:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:48.746 11:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:48.746 11:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:48.746 11:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:48.746 11:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:48.746 11:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:48.746 11:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:48.746 11:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:48.746 11:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:48.746 11:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:48.746 11:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:48.746 11:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=940336 00:10:48.746 11:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 940336 00:10:48.746 11:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:48.746 11:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 940336 ']' 00:10:48.746 11:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.746 11:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:48.746 11:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.746 11:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:48.746 11:34:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:48.746 [2024-11-15 11:34:13.391367] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
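Both pings succeeding confirms the single-host plumbing traced just above: the target-side port is moved into its own network namespace so one machine can act as initiator (10.0.0.1) and target (10.0.0.2) at once. Condensed into a sketch, with the interface names cvl_0_0/cvl_0_1 exactly as logged:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # The ACCEPT rule is tagged with an SPDK_NVMF comment so teardown can find it:
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT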
00:10:48.747 [2024-11-15 11:34:13.391435] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:48.747 [2024-11-15 11:34:13.493105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:48.747 [2024-11-15 11:34:13.545121] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:48.747 [2024-11-15 11:34:13.545175] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:48.747 [2024-11-15 11:34:13.545183] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:48.747 [2024-11-15 11:34:13.545191] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:48.747 [2024-11-15 11:34:13.545197] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:48.747 [2024-11-15 11:34:13.547289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:48.747 [2024-11-15 11:34:13.547447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:48.747 [2024-11-15 11:34:13.547648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:48.747 [2024-11-15 11:34:13.547854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:48.747 11:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:48.747 11:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:10:48.747 11:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:48.747 11:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:48.747 11:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:49.006 11:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:49.006 11:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:49.006 11:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.006 11:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:49.006 [2024-11-15 11:34:14.268910] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:49.006 11:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.006 11:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:49.006 11:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.006 11:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:49.006 Malloc0 00:10:49.006 11:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.006 11:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:49.006 11:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.006 11:34:14 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:49.006 11:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.006 11:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:49.006 11:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.006 11:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:49.006 11:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.006 11:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:49.006 11:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.006 11:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:49.006 [2024-11-15 11:34:14.349418] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:49.006 11:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.006 11:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:49.006 11:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:49.006 11:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:49.006 11:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:49.006 11:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:49.006 11:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:49.006 { 00:10:49.006 "params": { 00:10:49.006 "name": "Nvme$subsystem", 00:10:49.006 "trtype": "$TEST_TRANSPORT", 00:10:49.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:49.006 "adrfam": "ipv4", 00:10:49.006 "trsvcid": "$NVMF_PORT", 00:10:49.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:49.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:49.006 "hdgst": ${hdgst:-false}, 00:10:49.006 "ddgst": ${ddgst:-false} 00:10:49.006 }, 00:10:49.006 "method": "bdev_nvme_attach_controller" 00:10:49.006 } 00:10:49.006 EOF 00:10:49.006 )") 00:10:49.006 11:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:49.006 11:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:49.006 11:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:49.006 11:34:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:49.006 "params": { 00:10:49.006 "name": "Nvme1", 00:10:49.006 "trtype": "tcp", 00:10:49.006 "traddr": "10.0.0.2", 00:10:49.006 "adrfam": "ipv4", 00:10:49.006 "trsvcid": "4420", 00:10:49.006 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:49.006 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:49.006 "hdgst": false, 00:10:49.006 "ddgst": false 00:10:49.006 }, 00:10:49.006 "method": "bdev_nvme_attach_controller" 00:10:49.006 }' 00:10:49.006 [2024-11-15 11:34:14.408359] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:10:49.006 [2024-11-15 11:34:14.408423] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid940452 ] 00:10:49.006 [2024-11-15 11:34:14.499966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:49.265 [2024-11-15 11:34:14.557103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.265 [2024-11-15 11:34:14.557274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.265 [2024-11-15 11:34:14.557274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:49.523 I/O targets: 00:10:49.523 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:49.523 00:10:49.523 00:10:49.523 CUnit - A unit testing framework for C - Version 2.1-3 00:10:49.523 http://cunit.sourceforge.net/ 00:10:49.523 00:10:49.523 00:10:49.523 Suite: bdevio tests on: Nvme1n1 00:10:49.523 Test: blockdev write read block ...passed 00:10:49.780 Test: blockdev write zeroes read block ...passed 00:10:49.780 Test: blockdev write zeroes read no split ...passed 00:10:49.780 Test: blockdev write zeroes read split ...passed 00:10:49.780 Test: blockdev write zeroes read split partial ...passed 00:10:49.780 Test: blockdev reset ...[2024-11-15 11:34:15.052820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:49.780 [2024-11-15 11:34:15.052917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af0970 (9): Bad file descriptor 00:10:49.780 [2024-11-15 11:34:15.110188] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:49.780 passed 00:10:49.780 Test: blockdev write read 8 blocks ...passed 00:10:49.780 Test: blockdev write read size > 128k ...passed 00:10:49.780 Test: blockdev write read invalid size ...passed 00:10:49.780 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:49.780 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:49.780 Test: blockdev write read max offset ...passed 00:10:49.780 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:49.780 Test: blockdev writev readv 8 blocks ...passed 00:10:49.780 Test: blockdev writev readv 30 x 1block ...passed 00:10:50.038 Test: blockdev writev readv block ...passed 00:10:50.038 Test: blockdev writev readv size > 128k ...passed 00:10:50.038 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:50.038 Test: blockdev comparev and writev ...[2024-11-15 11:34:15.296055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:50.038 [2024-11-15 11:34:15.296103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:50.038 [2024-11-15 11:34:15.296121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:50.039 [2024-11-15 11:34:15.296130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:50.039 [2024-11-15 11:34:15.296680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:50.039 [2024-11-15 11:34:15.296700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:50.039 [2024-11-15 11:34:15.296714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:50.039 [2024-11-15 11:34:15.296723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:50.039 [2024-11-15 11:34:15.297285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:50.039 [2024-11-15 11:34:15.297296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:50.039 [2024-11-15 11:34:15.297310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:50.039 [2024-11-15 11:34:15.297318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:50.039 [2024-11-15 11:34:15.297838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:50.039 [2024-11-15 11:34:15.297851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:50.039 [2024-11-15 11:34:15.297865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:50.039 [2024-11-15 11:34:15.297873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:50.039 passed 00:10:50.039 Test: blockdev nvme passthru rw ...passed 00:10:50.039 Test: blockdev nvme passthru vendor specific ...[2024-11-15 11:34:15.382485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:50.039 [2024-11-15 11:34:15.382499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:50.039 [2024-11-15 11:34:15.382938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:50.039 [2024-11-15 11:34:15.382949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:50.039 [2024-11-15 11:34:15.383328] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:50.039 [2024-11-15 11:34:15.383339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:50.039 [2024-11-15 11:34:15.383717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:50.039 [2024-11-15 11:34:15.383730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:50.039 passed 00:10:50.039 Test: blockdev nvme admin passthru ...passed 00:10:50.039 Test: blockdev copy ...passed 00:10:50.039 00:10:50.039 Run Summary: Type Total Ran Passed Failed Inactive 00:10:50.039 suites 1 1 n/a 0 0 00:10:50.039 tests 23 23 23 0 0 00:10:50.039 asserts 152 152 152 0 n/a 00:10:50.039 00:10:50.039 Elapsed time = 1.039 seconds 00:10:50.297 11:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:50.297 11:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.297 11:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:50.297 11:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.297 11:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:50.297 11:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:50.297 11:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:50.297 11:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:50.297 11:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:50.297 11:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:50.297 11:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:50.297 11:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:50.298 rmmod nvme_tcp 00:10:50.298 rmmod nvme_fabrics 00:10:50.298 rmmod nvme_keyring 00:10:50.298 11:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:50.298 11:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:50.298 11:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
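The bdevio suite above exercised 23 tests against Nvme1n1 over NVMe/TCP. The COMPARE FAILURE (02/85) and ABORTED - FAILED FUSED (00/09) completion pairs printed during the comparev-and-writev test are the expected outcome for a fused compare-and-write whose compare half miscompares: the failed compare aborts its paired write. The target side of this stage reduces to the five RPCs traced earlier; a sketch of that setup follows, with arguments copied from the trace (the harness itself goes through its rpc_cmd wrapper, so invoking scripts/rpc.py directly is an assumption here).

    # Target setup for the bdevio stage, reconstructed from the xtrace above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                   # TCP transport (flags verbatim from the trace)
    $rpc bdev_malloc_create 64 512 -b Malloc0                      # 64 MiB RAM-backed bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0  # expose Malloc0 as NSID 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio then attaches over TCP using the bdev_nvme_attach_controller JSON printed earlier, and nvmf_delete_subsystem plus nvmftestfini (continuing below) tear the target down.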
00:10:50.298 11:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 940336 ']' 00:10:50.298 11:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 940336 00:10:50.298 11:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 940336 ']' 00:10:50.298 11:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 940336 00:10:50.298 11:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:10:50.298 11:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:50.298 11:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 940336 00:10:50.298 11:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:10:50.298 11:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:10:50.298 11:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 940336' 00:10:50.298 killing process with pid 940336 00:10:50.298 11:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 940336 00:10:50.298 11:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 940336 00:10:50.558 11:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:50.559 11:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:50.559 11:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:50.559 11:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:50.559 11:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:50.559 11:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:50.559 11:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:50.559 11:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:50.559 11:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:50.559 11:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.559 11:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:50.559 11:34:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.475 11:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:52.475 00:10:52.475 real 0m12.365s 00:10:52.475 user 0m13.899s 00:10:52.475 sys 0m6.283s 00:10:52.475 11:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:52.475 11:34:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:52.475 ************************************ 00:10:52.475 END TEST nvmf_bdevio 00:10:52.475 ************************************ 00:10:52.475 11:34:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:52.475 00:10:52.475 real 5m5.542s 00:10:52.475 user 11m56.600s 00:10:52.475 sys 1m51.098s 00:10:52.475 
11:34:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:52.475 11:34:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:52.736 ************************************ 00:10:52.736 END TEST nvmf_target_core 00:10:52.736 ************************************ 00:10:52.736 11:34:18 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:52.736 11:34:18 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:52.736 11:34:18 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:52.736 11:34:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:52.736 ************************************ 00:10:52.736 START TEST nvmf_target_extra 00:10:52.736 ************************************ 00:10:52.736 11:34:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:52.736 * Looking for test storage... 00:10:52.736 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:52.736 11:34:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:52.736 11:34:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:10:52.737 11:34:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:52.999 11:34:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:53.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.000 --rc genhtml_branch_coverage=1 00:10:53.000 --rc genhtml_function_coverage=1 00:10:53.000 --rc genhtml_legend=1 00:10:53.000 --rc geninfo_all_blocks=1 00:10:53.000 --rc geninfo_unexecuted_blocks=1 00:10:53.000 00:10:53.000 ' 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:53.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.000 --rc genhtml_branch_coverage=1 00:10:53.000 --rc genhtml_function_coverage=1 00:10:53.000 --rc genhtml_legend=1 00:10:53.000 --rc geninfo_all_blocks=1 00:10:53.000 --rc geninfo_unexecuted_blocks=1 00:10:53.000 00:10:53.000 ' 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:53.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.000 --rc genhtml_branch_coverage=1 00:10:53.000 --rc genhtml_function_coverage=1 00:10:53.000 --rc genhtml_legend=1 00:10:53.000 --rc geninfo_all_blocks=1 00:10:53.000 --rc geninfo_unexecuted_blocks=1 00:10:53.000 00:10:53.000 ' 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:53.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.000 --rc genhtml_branch_coverage=1 00:10:53.000 --rc genhtml_function_coverage=1 00:10:53.000 --rc genhtml_legend=1 00:10:53.000 --rc geninfo_all_blocks=1 00:10:53.000 --rc geninfo_unexecuted_blocks=1 00:10:53.000 00:10:53.000 ' 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
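The lt 1.15 2 / cmp_versions walk traced above is the harness's lcov version gate: both version strings are split on '.', '-' and ':' (IFS=.-:) and compared field by field. Below is a simplified, self-contained sketch of just the '<' branch; in this sketch missing fields count as 0, whereas the real scripts/common.sh normalizes each field through its decimal helper and dispatches on the operator via a case statement.

    # Minimal sketch of the field-wise version comparison traced above.
    version_lt() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # earliest differing field decides
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    version_lt 1.15 2 && echo 'lcov 1.15 predates 2'   # prints the message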
00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:53.000 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:53.000 ************************************ 00:10:53.000 START TEST nvmf_example 00:10:53.000 ************************************ 00:10:53.000 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:53.000 * Looking for test storage... 
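The repeated "common.sh: line 33: [: : integer expression expected" complaint above (it recurs later in the trace) comes from build_nvmf_app_args evaluating '[' '' -eq 1 ']'. test's -eq demands integers on both sides, and the left operand expands empty, so the comparison errors out; the run proceeds regardless, so the noise appears benign here. A one-line reproduction:

    # test(1) with -eq and an empty operand reproduces the diagnostic above.
    [ '' -eq 1 ]   # stderr: [: : integer expression expected; exit status 2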
00:10:53.001 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:53.001 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:53.001 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:10:53.001 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:53.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.262 --rc genhtml_branch_coverage=1 00:10:53.262 --rc genhtml_function_coverage=1 00:10:53.262 --rc genhtml_legend=1 00:10:53.262 --rc geninfo_all_blocks=1 00:10:53.262 --rc geninfo_unexecuted_blocks=1 00:10:53.262 00:10:53.262 ' 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:53.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.262 --rc genhtml_branch_coverage=1 00:10:53.262 --rc genhtml_function_coverage=1 00:10:53.262 --rc genhtml_legend=1 00:10:53.262 --rc geninfo_all_blocks=1 00:10:53.262 --rc geninfo_unexecuted_blocks=1 00:10:53.262 00:10:53.262 ' 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:53.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.262 --rc genhtml_branch_coverage=1 00:10:53.262 --rc genhtml_function_coverage=1 00:10:53.262 --rc genhtml_legend=1 00:10:53.262 --rc geninfo_all_blocks=1 00:10:53.262 --rc geninfo_unexecuted_blocks=1 00:10:53.262 00:10:53.262 ' 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:53.262 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.262 --rc genhtml_branch_coverage=1 00:10:53.262 --rc genhtml_function_coverage=1 00:10:53.262 --rc genhtml_legend=1 00:10:53.262 --rc geninfo_all_blocks=1 00:10:53.262 --rc geninfo_unexecuted_blocks=1 00:10:53.262 00:10:53.262 ' 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:53.262 11:34:18 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.262 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.263 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.263 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:53.263 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.263 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:53.263 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:53.263 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:53.263 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:53.263 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:53.263 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:53.263 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:53.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:53.263 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:53.263 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:53.263 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:53.263 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:53.263 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:53.263 11:34:18 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:53.263 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:53.263 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:53.263 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:53.263 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:53.263 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:53.263 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:53.263 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:53.263 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:53.263 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:53.263 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:53.263 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:53.263 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:53.263 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:53.263 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.263 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:53.263 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.263 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:53.263 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:53.263 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:53.263 11:34:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:01.414 11:34:25 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:01.414 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:01.414 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:01.414 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:01.414 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.414 11:34:25 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:01.414 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:01.415 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:01.415 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:01.415 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:01.415 11:34:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:01.415 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:01.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
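To make the test-bed topology explicit: the nvmf_tcp_init sequence traced above moves one e810 port (cvl_0_0) into a fresh network namespace to act as the target at 10.0.0.2, keeps the other port (cvl_0_1) in the root namespace as the initiator at 10.0.0.1, and opens TCP port 4420 through iptables. Condensed from the trace (iptables comment argument elided; run as root):

    # Namespace plumbing condensed from the nvmf_tcp_init trace above.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port moves into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator stays in the root netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # open NVMe/TCP port 4420

The two pings that bracket this point (10.0.0.2 from the root namespace, then 10.0.0.1 from inside it via ip netns exec) verify reachability in both directions before the example target is started.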
00:11:01.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.532 ms 00:11:01.415 00:11:01.415 --- 10.0.0.2 ping statistics --- 00:11:01.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.415 rtt min/avg/max/mdev = 0.532/0.532/0.532/0.000 ms 00:11:01.415 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:01.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:01.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:11:01.415 00:11:01.415 --- 10.0.0.1 ping statistics --- 00:11:01.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.415 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:11:01.415 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:01.415 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:01.415 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:01.415 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:01.415 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:01.415 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:01.415 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:01.415 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:01.415 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:01.415 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:01.415 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:01.415 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:01.415 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.415 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:01.415 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:01.415 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=945102 00:11:01.415 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:01.415 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:01.415 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 945102 00:11:01.415 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # '[' -z 945102 ']' 00:11:01.415 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.415 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:01.415 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.415 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:01.415 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.676 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:01.676 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@866 -- # return 0 00:11:01.676 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:01.676 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:01.676 11:34:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:01.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:01.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:01.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:01.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:01.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:01.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:01.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.676 11:34:27 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:01.676 11:34:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:13.893 Initializing NVMe Controllers 00:11:13.893 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:13.893 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:13.893 Initialization complete. Launching workers. 00:11:13.893 ======================================================== 00:11:13.893 Latency(us) 00:11:13.893 Device Information : IOPS MiB/s Average min max 00:11:13.893 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18457.38 72.10 3467.12 627.34 20002.16 00:11:13.893 ======================================================== 00:11:13.893 Total : 18457.38 72.10 3467.12 627.34 20002.16 00:11:13.893 00:11:13.893 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:13.893 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:13.893 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:13.893 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:13.893 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:13.894 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:13.894 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:13.894 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:13.894 rmmod nvme_tcp 00:11:13.894 rmmod nvme_fabrics 00:11:13.894 rmmod nvme_keyring 00:11:13.894 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:13.894 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:11:13.894 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:13.894 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 945102 ']' 00:11:13.894 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 945102 00:11:13.894 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # '[' -z 945102 ']' 00:11:13.894 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # kill -0 945102 00:11:13.894 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # uname 00:11:13.894 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:13.894 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 945102 00:11:13.894 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # 
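# [editor's note] The summary table above comes from the spdk_nvme_perf run traced
# at nvmf_example.sh@61. Its flags decode as: -q 64 (queue depth), -o 4096 (4 KiB
# I/Os), -w randrw -M 30 (random mixed workload, 30% reads), -t 10 (seconds), and
# -r (the transport ID of the listener created above). To repeat the measurement
# against a live target:
./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
# The ~18.5k IOPS at 4 KiB works out to 18457.38 * 4096 / 2^20 = 72.10, matching
# the MiB/s column; Average/min/max are per-I/O latency in microseconds.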
process_name=nvmf 00:11:13.894 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@962 -- # '[' nvmf = sudo ']' 00:11:13.894 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # echo 'killing process with pid 945102' 00:11:13.894 killing process with pid 945102 00:11:13.894 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # kill 945102 00:11:13.894 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@976 -- # wait 945102 00:11:13.894 nvmf threads initialize successfully 00:11:13.894 bdev subsystem init successfully 00:11:13.894 created a nvmf target service 00:11:13.894 create targets's poll groups done 00:11:13.894 all subsystems of target started 00:11:13.894 nvmf target is running 00:11:13.894 all subsystems of target stopped 00:11:13.894 destroy targets's poll groups done 00:11:13.894 destroyed the nvmf target service 00:11:13.894 bdev subsystem finish successfully 00:11:13.894 nvmf threads destroy successfully 00:11:13.894 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:13.894 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:13.894 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:13.894 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:13.894 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:13.894 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:13.894 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:13.894 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:13.894 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:13.894 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:13.894 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:13.894 11:34:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.465 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:14.465 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:14.465 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:14.465 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:14.465 00:11:14.465 real 0m21.413s 00:11:14.465 user 0m46.698s 00:11:14.465 sys 0m6.976s 00:11:14.465 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:14.465 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:14.465 ************************************ 00:11:14.465 END TEST nvmf_example 00:11:14.465 ************************************ 00:11:14.465 11:34:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:14.465 11:34:39 
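# [editor's note] Teardown above mirrors the setup: the target process is killed
# and waited on, the kernel nvme-tcp/nvme-fabrics modules are unloaded, the
# iptables rules the harness installed are stripped, and the test namespace is
# removed. The iptr helper traced at nvmf/common.sh@791 is a save/filter/restore
# pipeline:
iptables-save | grep -v SPDK_NVMF | iptables-restore     # drop only SPDK-tagged rules
# run_test (traced here as the suite transitions) wraps each test script: it
# prints the START/END banners seen in this log, times the script, and propagates
# a non-zero exit to fail the build, e.g.:
#   run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp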
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:14.465 11:34:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:14.465 11:34:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:14.465 ************************************ 00:11:14.465 START TEST nvmf_filesystem 00:11:14.465 ************************************ 00:11:14.465 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:14.465 * Looking for test storage... 00:11:14.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:14.465 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:14.465 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:11:14.465 11:34:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:14.729 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:14.729 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:14.729 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:14.729 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:14.729 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:14.729 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:14.729 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:14.729 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:14.729 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:14.729 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:14.729 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:14.729 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:14.729 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:14.729 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:14.729 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:14.729 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:14.729 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:14.729 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:14.729 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:14.729 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:14.729 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:14.729 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:14.729 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:14.729 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:14.729 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:14.729 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:14.729 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:14.729 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:14.729 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:14.729 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:14.729 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:14.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.729 --rc genhtml_branch_coverage=1 00:11:14.729 --rc genhtml_function_coverage=1 00:11:14.729 --rc genhtml_legend=1 00:11:14.729 --rc geninfo_all_blocks=1 00:11:14.729 --rc geninfo_unexecuted_blocks=1 00:11:14.729 00:11:14.729 ' 00:11:14.729 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:14.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.729 --rc genhtml_branch_coverage=1 00:11:14.729 --rc genhtml_function_coverage=1 00:11:14.729 --rc genhtml_legend=1 00:11:14.729 --rc geninfo_all_blocks=1 00:11:14.729 --rc geninfo_unexecuted_blocks=1 00:11:14.729 00:11:14.729 ' 00:11:14.729 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:14.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.729 --rc genhtml_branch_coverage=1 00:11:14.729 --rc genhtml_function_coverage=1 00:11:14.729 --rc genhtml_legend=1 00:11:14.729 --rc geninfo_all_blocks=1 00:11:14.729 --rc geninfo_unexecuted_blocks=1 00:11:14.729 00:11:14.729 ' 00:11:14.729 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:14.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.729 --rc genhtml_branch_coverage=1 00:11:14.729 --rc genhtml_function_coverage=1 00:11:14.729 --rc genhtml_legend=1 00:11:14.729 --rc geninfo_all_blocks=1 00:11:14.729 --rc geninfo_unexecuted_blocks=1 00:11:14.729 00:11:14.729 ' 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:14.730 11:34:40 
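# [editor's note] The scripts/common.sh trace above (lt 1.15 2 -> cmp_versions
# 1.15 '<' 2) is a field-wise version comparison: both strings are split on '.',
# '-' and ':' via IFS, each field is validated as a decimal, and fields are
# compared left to right until one side wins. A condensed sketch of that logic
# (the real helper also normalizes non-numeric fields):
cmp_versions_sketch() {   # usage: cmp_versions_sketch 1.15 '<' 2
    local IFS=.-: op=$2 i
    read -ra v1 <<< "$1"; read -ra v2 <<< "$3"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        if ((${v1[i]:-0} > ${v2[i]:-0})); then [[ $op == '>' ]]; return; fi
        if ((${v1[i]:-0} < ${v2[i]:-0})); then [[ $op == '<' ]]; return; fi
    done
    [[ $op == '==' ]]     # every field equal
}
cmp_versions_sketch 1.15 '<' 2 && echo "lcov is older than 2.x"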
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:14.730 
11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:14.730 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:14.731 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:14.731 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:14.731 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:14.731 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:14.731 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:14.731 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:14.731 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:14.731 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:14.731 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:14.731 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:14.731 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:14.731 #define SPDK_CONFIG_H 00:11:14.731 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:14.731 #define SPDK_CONFIG_APPS 1 00:11:14.731 #define SPDK_CONFIG_ARCH native 00:11:14.731 #undef SPDK_CONFIG_ASAN 00:11:14.731 #undef SPDK_CONFIG_AVAHI 00:11:14.731 #undef SPDK_CONFIG_CET 00:11:14.731 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:14.731 #define SPDK_CONFIG_COVERAGE 1 00:11:14.731 #define SPDK_CONFIG_CROSS_PREFIX 00:11:14.731 #undef SPDK_CONFIG_CRYPTO 00:11:14.731 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:14.731 #undef SPDK_CONFIG_CUSTOMOCF 00:11:14.731 #undef SPDK_CONFIG_DAOS 00:11:14.731 #define SPDK_CONFIG_DAOS_DIR 00:11:14.731 #define SPDK_CONFIG_DEBUG 1 00:11:14.731 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:14.731 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:14.731 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:14.731 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:14.731 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:14.731 #undef SPDK_CONFIG_DPDK_UADK 00:11:14.731 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:14.731 #define SPDK_CONFIG_EXAMPLES 1 00:11:14.731 #undef SPDK_CONFIG_FC 00:11:14.731 #define SPDK_CONFIG_FC_PATH 00:11:14.731 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:14.731 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:14.731 #define SPDK_CONFIG_FSDEV 1 00:11:14.731 #undef SPDK_CONFIG_FUSE 00:11:14.731 #undef SPDK_CONFIG_FUZZER 00:11:14.731 #define SPDK_CONFIG_FUZZER_LIB 00:11:14.731 #undef SPDK_CONFIG_GOLANG 00:11:14.731 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:14.731 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:14.731 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:14.731 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:14.731 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:14.731 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:14.731 #undef SPDK_CONFIG_HAVE_LZ4 00:11:14.731 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:14.731 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:14.731 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:14.731 #define SPDK_CONFIG_IDXD 1 00:11:14.731 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:14.731 #undef SPDK_CONFIG_IPSEC_MB 00:11:14.731 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:14.731 #define SPDK_CONFIG_ISAL 1 00:11:14.731 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:14.731 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:14.731 #define SPDK_CONFIG_LIBDIR 00:11:14.731 #undef SPDK_CONFIG_LTO 00:11:14.731 #define SPDK_CONFIG_MAX_LCORES 128 00:11:14.731 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:14.731 #define SPDK_CONFIG_NVME_CUSE 1 00:11:14.731 #undef SPDK_CONFIG_OCF 00:11:14.731 #define SPDK_CONFIG_OCF_PATH 00:11:14.731 #define SPDK_CONFIG_OPENSSL_PATH 00:11:14.731 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:14.731 #define SPDK_CONFIG_PGO_DIR 00:11:14.731 #undef SPDK_CONFIG_PGO_USE 00:11:14.731 #define SPDK_CONFIG_PREFIX /usr/local 00:11:14.731 #undef SPDK_CONFIG_RAID5F 00:11:14.731 #undef SPDK_CONFIG_RBD 00:11:14.731 #define SPDK_CONFIG_RDMA 1 00:11:14.731 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:14.731 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:14.731 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:14.731 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:14.731 #define SPDK_CONFIG_SHARED 1 00:11:14.731 #undef SPDK_CONFIG_SMA 00:11:14.731 #define SPDK_CONFIG_TESTS 1 00:11:14.731 #undef SPDK_CONFIG_TSAN 
00:11:14.731 #define SPDK_CONFIG_UBLK 1 00:11:14.731 #define SPDK_CONFIG_UBSAN 1 00:11:14.731 #undef SPDK_CONFIG_UNIT_TESTS 00:11:14.731 #undef SPDK_CONFIG_URING 00:11:14.731 #define SPDK_CONFIG_URING_PATH 00:11:14.731 #undef SPDK_CONFIG_URING_ZNS 00:11:14.731 #undef SPDK_CONFIG_USDT 00:11:14.731 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:14.731 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:14.731 #define SPDK_CONFIG_VFIO_USER 1 00:11:14.731 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:14.731 #define SPDK_CONFIG_VHOST 1 00:11:14.731 #define SPDK_CONFIG_VIRTIO 1 00:11:14.731 #undef SPDK_CONFIG_VTUNE 00:11:14.731 #define SPDK_CONFIG_VTUNE_DIR 00:11:14.731 #define SPDK_CONFIG_WERROR 1 00:11:14.731 #define SPDK_CONFIG_WPDK_DIR 00:11:14.731 #undef SPDK_CONFIG_XNVME 00:11:14.731 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:14.731 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:14.731 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:14.731 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:14.731 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:14.731 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:14.731 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:14.731 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.731 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.731 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
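# [editor's note] The two large dumps above are the same build configuration in its
# two generated forms: test/common/build_config.sh (CONFIG_UBSAN=y and friends,
# sourced by shell) and include/spdk/config.h (#define SPDK_CONFIG_UBSAN 1, compiled
# into C). The heavily escaped glob at applications.sh@23 is bash xtrace for a
# pattern match that asks "is this a debug build?" by searching the header, roughly:
if [[ $(< include/spdk/config.h) == *"#define SPDK_CONFIG_DEBUG"* ]]; then
    echo "built with --enable-debug"
fi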
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.731 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:14.731 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.731 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:14.731 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:14.731 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:14.731 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:14.731 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:14.731 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:14.731 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:14.731 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:14.731 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:14.731 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:14.731 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:14.731 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:14.731 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:14.731 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:14.731 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:14.731 11:34:40 
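# [editor's note] The PATH above (and PYTHONPATH/LD_LIBRARY_PATH later in this log)
# repeats the same directories many times because paths/export.sh prepends its
# toolchain directories unconditionally and is sourced once per nested test shell.
# Lookup stops at the first hit, so the duplicates are harmless, just noisy. If the
# noise matters, a pure-bash dedup (a sketch, not part of the harness):
dedup_path() {
    local IFS=: seen=: out= p
    for p in $PATH; do
        [[ $seen == *":$p:"* ]] && continue   # skip directories already kept
        seen+="$p:"
        out+="${out:+:}$p"
    done
    printf '%s\n' "$out"
}
PATH=$(dedup_path)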
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:14.731 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:14.732 11:34:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:14.732 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
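# [editor's note] The long run of ': 0' / ': 1' plus export pairs above is
# autotest_common.sh giving every SPDK_TEST_* switch a default and exporting it:
# flags the job configuration set (SPDK_TEST_NVMF=1, SPDK_TEST_NVMF_TRANSPORT=tcp,
# SPDK_TEST_NVMF_NICS=e810, ...) keep their values, everything else defaults to 0.
# The traced pattern is parameter expansion of this shape (a sketch; exact quoting
# in the script may differ):
: "${SPDK_TEST_NVMF:=0}"
export SPDK_TEST_NVMF
# and suites gate on the flag numerically, e.g.:
if ((SPDK_TEST_NVMF)); then
    echo "would run the nvmf suites"
fi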
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
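The rm/cat/echo/export sequence above assembles a LeakSanitizer suppression file on the fly: the one visible pattern, leak:libfuse3.so, tells LSan to ignore leak reports whose stacks resolve into libfuse3, and LSAN_OPTIONS then points the runtime at that file. Condensed to just the steps shown in the trace:

    rm -rf /var/tmp/asan_suppression_file
    echo 'leak:libfuse3.so' >> /var/tmp/asan_suppression_file
    export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file

Each line of the file is a leak:<pattern> entry matched against symbolized stack frames, so known third-party leaks can be silenced without rebuilding anything.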
00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:11:14.733 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j144 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 947893 ]] 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 947893 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:11:14.734 
11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.Kxl1mR 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.Kxl1mR/tests/target /tmp/spdk.Kxl1mR 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:11:14.734 11:34:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=122522058752 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=129356550144 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=6834491392 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64668241920 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678273024 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=25847947264 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=25871310848 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23363584 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=efivarfs 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=efivarfs 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=216064 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=507904 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=287744 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:14.734 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:14.735 11:34:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64677474304 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678277120 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=802816 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12935639040 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12935651328 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:11:14.735 * Looking for test storage... 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=122522058752 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=9049083904 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:14.735 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:11:14.735 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:14.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.997 --rc genhtml_branch_coverage=1 00:11:14.997 --rc genhtml_function_coverage=1 00:11:14.997 --rc genhtml_legend=1 00:11:14.997 --rc geninfo_all_blocks=1 00:11:14.997 --rc geninfo_unexecuted_blocks=1 00:11:14.997 00:11:14.997 ' 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:14.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.997 --rc genhtml_branch_coverage=1 00:11:14.997 --rc genhtml_function_coverage=1 00:11:14.997 --rc genhtml_legend=1 00:11:14.997 --rc geninfo_all_blocks=1 00:11:14.997 --rc geninfo_unexecuted_blocks=1 00:11:14.997 00:11:14.997 ' 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:14.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.997 --rc genhtml_branch_coverage=1 00:11:14.997 --rc genhtml_function_coverage=1 00:11:14.997 --rc genhtml_legend=1 00:11:14.997 --rc geninfo_all_blocks=1 00:11:14.997 --rc geninfo_unexecuted_blocks=1 00:11:14.997 00:11:14.997 ' 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:14.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.997 --rc genhtml_branch_coverage=1 00:11:14.997 --rc genhtml_function_coverage=1 00:11:14.997 --rc genhtml_legend=1 00:11:14.997 --rc geninfo_all_blocks=1 00:11:14.997 --rc geninfo_unexecuted_blocks=1 00:11:14.997 00:11:14.997 ' 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:14.997 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:14.998 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:14.998 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:14.998 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:14.998 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:14.998 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:14.998 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:14.998 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:14.998 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:14.998 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:14.998 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:14.998 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.998 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.998 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.998 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:14.998 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.998 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:14.998 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:14.998 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:14.998 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:14.998 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:14.998 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:14.998 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:14.998 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:14.998 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:14.998 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:14.998 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:14.998 11:34:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:14.998 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:14.998 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:14.998 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:14.998 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:14.998 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:14.998 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:14.998 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:14.998 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.998 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:14.998 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.998 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:14.998 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:14.998 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:14.998 11:34:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:23.153 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:23.153 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:23.153 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:23.153 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:23.153 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:23.153 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:23.153 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:23.153 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:23.153 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:23.153 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:23.153 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:23.153 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:23.153 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:23.153 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:23.153 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:23.153 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:23.153 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:23.153 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:23.153 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:23.153 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:23.154 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:23.154 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:23.154 11:34:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:23.154 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:23.154 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:23.154 11:34:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:23.154 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:23.154 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.412 ms 00:11:23.154 00:11:23.154 --- 10.0.0.2 ping statistics --- 00:11:23.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.154 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:23.154 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:23.154 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:11:23.154 00:11:23.154 --- 10.0.0.1 ping statistics --- 00:11:23.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:23.154 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:23.154 ************************************ 00:11:23.154 START TEST nvmf_filesystem_no_in_capsule 00:11:23.154 ************************************ 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 0 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:23.154 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:23.155 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:23.155 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:23.155 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:23.155 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=951769 00:11:23.155 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 951769 00:11:23.155 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:23.155 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 951769 ']' 00:11:23.155 11:34:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.155 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:23.155 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.155 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:23.155 11:34:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:23.155 [2024-11-15 11:34:48.014364] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:11:23.155 [2024-11-15 11:34:48.014430] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:23.155 [2024-11-15 11:34:48.116808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:23.155 [2024-11-15 11:34:48.170372] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:23.155 [2024-11-15 11:34:48.170428] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:23.155 [2024-11-15 11:34:48.170436] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:23.155 [2024-11-15 11:34:48.170444] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:23.155 [2024-11-15 11:34:48.170450] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
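Two details in this stretch deserve a flag. First, the earlier line "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected" is a real, if harmless, shell bug: the trace shows '[' '' -eq 1 ']', an empty string fed to an arithmetic test. A defensive sketch of the usual fix, with VAR standing in for whichever unset variable common.sh line 33 actually tests:

    if [ "${VAR:-0}" -eq 1 ]; then    # default unset/empty to 0 before the integer comparison
        echo "feature enabled"
    fi

Second, waitforlisten 951769 above is the harness polling until the freshly launched nvmf_tgt answers on its RPC socket (rpc_addr=/var/tmp/spdk.sock, max_retries=100 in the trace); conceptually it retries something like scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods in a loop rather than sleeping for a fixed interval.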
00:11:23.155 [2024-11-15 11:34:48.172608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:23.155 [2024-11-15 11:34:48.172721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:23.155 [2024-11-15 11:34:48.172883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:23.155 [2024-11-15 11:34:48.172884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.416 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:23.416 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:11:23.416 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:23.416 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:23.416 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:23.416 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:23.416 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:23.416 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:23.416 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.416 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:23.416 [2024-11-15 11:34:48.890554] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:23.416 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.416 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:23.416 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.416 11:34:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:23.677 Malloc1 00:11:23.677 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.678 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:23.678 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.678 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:23.678 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.678 11:34:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:23.678 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.678 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:23.678 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.678 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:23.678 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.678 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:23.678 [2024-11-15 11:34:49.053829] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:23.678 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.678 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:23.678 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:11:23.678 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:11:23.678 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:11:23.678 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:11:23.678 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:23.678 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.678 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:23.678 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.678 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:11:23.678 { 00:11:23.678 "name": "Malloc1", 00:11:23.678 "aliases": [ 00:11:23.678 "f63c719f-9361-439f-91b3-077b3ae6cfd1" 00:11:23.678 ], 00:11:23.678 "product_name": "Malloc disk", 00:11:23.678 "block_size": 512, 00:11:23.678 "num_blocks": 1048576, 00:11:23.678 "uuid": "f63c719f-9361-439f-91b3-077b3ae6cfd1", 00:11:23.678 "assigned_rate_limits": { 00:11:23.678 "rw_ios_per_sec": 0, 00:11:23.678 "rw_mbytes_per_sec": 0, 00:11:23.678 "r_mbytes_per_sec": 0, 00:11:23.678 "w_mbytes_per_sec": 0 00:11:23.678 }, 00:11:23.678 "claimed": true, 00:11:23.678 "claim_type": "exclusive_write", 00:11:23.678 "zoned": false, 00:11:23.678 "supported_io_types": { 00:11:23.678 "read": 
true, 00:11:23.678 "write": true, 00:11:23.678 "unmap": true, 00:11:23.678 "flush": true, 00:11:23.678 "reset": true, 00:11:23.678 "nvme_admin": false, 00:11:23.678 "nvme_io": false, 00:11:23.678 "nvme_io_md": false, 00:11:23.678 "write_zeroes": true, 00:11:23.678 "zcopy": true, 00:11:23.678 "get_zone_info": false, 00:11:23.678 "zone_management": false, 00:11:23.678 "zone_append": false, 00:11:23.678 "compare": false, 00:11:23.678 "compare_and_write": false, 00:11:23.678 "abort": true, 00:11:23.678 "seek_hole": false, 00:11:23.678 "seek_data": false, 00:11:23.678 "copy": true, 00:11:23.678 "nvme_iov_md": false 00:11:23.678 }, 00:11:23.678 "memory_domains": [ 00:11:23.678 { 00:11:23.678 "dma_device_id": "system", 00:11:23.678 "dma_device_type": 1 00:11:23.678 }, 00:11:23.678 { 00:11:23.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.678 "dma_device_type": 2 00:11:23.678 } 00:11:23.678 ], 00:11:23.678 "driver_specific": {} 00:11:23.678 } 00:11:23.678 ]' 00:11:23.678 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:11:23.678 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:11:23.678 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:11:23.938 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:11:23.938 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:11:23.938 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:11:23.938 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:23.938 11:34:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:25.322 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:25.322 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:11:25.322 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:25.322 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:25.322 11:34:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:11:27.229 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:27.229 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:27.229 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:27.229 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:27.229 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:27.229 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:11:27.229 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:27.229 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:27.488 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:27.488 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:27.488 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:27.488 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:27.488 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:27.488 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:27.488 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:27.488 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:27.488 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:27.488 11:34:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:28.056 11:34:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:29.434 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:29.434 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:29.434 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:29.434 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:29.434 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.434 ************************************ 00:11:29.434 START TEST filesystem_ext4 00:11:29.434 ************************************ 00:11:29.434 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 
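The trace above is the host-side bring-up that every variant of this test repeats: connect the initiator to the subsystem over NVMe/TCP, wait for the namespace to surface as a block device, then lay down a GPT partition for the filesystem runs. A minimal standalone sketch of that flow, assuming the target is already listening on 10.0.0.2:4420 and substituting a placeholder host UUID for the machine-specific one in the log:

  # Connect to the SPDK subsystem over NVMe/TCP (host UUID is a placeholder)
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:<host-uuid> \
      --hostid=<host-uuid> -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  # Poll until the namespace appears, keyed on the subsystem serial number
  until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 2; done
  # Recover the kernel device name sitting behind that serial
  nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
  # One GPT partition spanning the whole namespace
  parted -s "/dev/${nvme_name}" mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe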
00:11:29.434 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:29.434 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:29.434 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:29.434 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:11:29.434 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:29.434 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:11:29.434 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local force 00:11:29.434 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:11:29.434 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:11:29.434 11:34:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:29.434 mke2fs 1.47.0 (5-Feb-2023) 00:11:29.434 Discarding device blocks: 0/522240 done 00:11:29.434 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:29.434 Filesystem UUID: 322fee4f-bc49-4133-92a0-bb7ec1e6bfdf 00:11:29.434 Superblock backups stored on blocks: 00:11:29.434 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:29.434 00:11:29.434 Allocating group tables: 0/64 done 00:11:29.434 Writing inode tables: 0/64 done 00:11:31.970 Creating journal (8192 blocks): done 00:11:31.970 Writing superblocks and filesystem accounting information: 0/64 done 00:11:31.970 00:11:31.970 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@947 -- # return 0 00:11:31.970 11:34:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:38.555 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:38.555 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:38.555 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:38.555 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:38.555 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:38.555 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:38.555 
11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 951769 00:11:38.555 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:38.555 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:38.555 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:38.555 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:38.555 00:11:38.555 real 0m8.385s 00:11:38.555 user 0m0.036s 00:11:38.555 sys 0m0.075s 00:11:38.555 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:38.555 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:38.555 ************************************ 00:11:38.555 END TEST filesystem_ext4 00:11:38.555 ************************************ 00:11:38.555 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:38.555 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:38.555 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:38.555 11:35:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.555 ************************************ 00:11:38.555 START TEST filesystem_btrfs 00:11:38.555 ************************************ 00:11:38.556 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:38.556 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:38.556 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:38.556 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:38.556 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:11:38.556 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:38.556 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:11:38.556 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local force 00:11:38.556 11:35:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:11:38.556 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:11:38.556 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:38.556 btrfs-progs v6.8.1 00:11:38.556 See https://btrfs.readthedocs.io for more information. 00:11:38.556 00:11:38.556 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:38.556 NOTE: several default settings have changed in version 5.15, please make sure 00:11:38.556 this does not affect your deployments: 00:11:38.556 - DUP for metadata (-m dup) 00:11:38.556 - enabled no-holes (-O no-holes) 00:11:38.556 - enabled free-space-tree (-R free-space-tree) 00:11:38.556 00:11:38.556 Label: (null) 00:11:38.556 UUID: 4f338ffd-c680-4e68-b08d-f8691527c90f 00:11:38.556 Node size: 16384 00:11:38.556 Sector size: 4096 (CPU page size: 4096) 00:11:38.556 Filesystem size: 510.00MiB 00:11:38.556 Block group profiles: 00:11:38.556 Data: single 8.00MiB 00:11:38.556 Metadata: DUP 32.00MiB 00:11:38.556 System: DUP 8.00MiB 00:11:38.556 SSD detected: yes 00:11:38.556 Zoned device: no 00:11:38.556 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:38.556 Checksum: crc32c 00:11:38.556 Number of devices: 1 00:11:38.556 Devices: 00:11:38.556 ID SIZE PATH 00:11:38.556 1 510.00MiB /dev/nvme0n1p1 00:11:38.556 00:11:38.556 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@947 -- # return 0 00:11:38.556 11:35:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:39.125 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:39.125 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:39.125 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:39.125 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:39.125 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:39.125 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:39.125 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 951769 00:11:39.125 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:39.125 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:39.125 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:39.125 
11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:39.125 00:11:39.125 real 0m1.440s 00:11:39.125 user 0m0.036s 00:11:39.125 sys 0m0.115s 00:11:39.125 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:39.125 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:39.125 ************************************ 00:11:39.125 END TEST filesystem_btrfs 00:11:39.125 ************************************ 00:11:39.125 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:39.125 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:39.125 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:39.125 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.125 ************************************ 00:11:39.125 START TEST filesystem_xfs 00:11:39.125 ************************************ 00:11:39.125 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:11:39.125 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:39.125 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:39.125 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:39.125 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:11:39.125 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:39.125 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local i=0 00:11:39.125 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local force 00:11:39.125 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:11:39.125 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # force=-f 00:11:39.125 11:35:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:39.384 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:39.384 = sectsz=512 attr=2, projid32bit=1 00:11:39.384 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:39.384 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:39.384 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:39.384 = sunit=0 swidth=0 blks 00:11:39.384 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:39.384 log =internal log bsize=4096 blocks=16384, version=2 00:11:39.384 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:39.384 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:40.321 Discarding blocks...Done. 00:11:40.321 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@947 -- # return 0 00:11:40.321 11:35:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:42.230 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:42.230 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:42.230 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:42.230 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:42.230 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:42.230 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:42.230 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 951769 00:11:42.230 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:42.230 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:42.230 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:42.230 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:42.230 00:11:42.230 real 0m2.761s 00:11:42.230 user 0m0.031s 00:11:42.230 sys 0m0.077s 00:11:42.230 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:42.230 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:42.230 ************************************ 00:11:42.230 END TEST filesystem_xfs 00:11:42.230 ************************************ 00:11:42.230 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:42.230 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:42.230 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:42.230 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.230 11:35:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:42.230 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:11:42.230 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:42.230 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:42.230 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:42.230 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:42.230 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:11:42.231 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:42.491 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.491 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:42.491 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.491 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:42.491 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 951769 00:11:42.491 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 951769 ']' 00:11:42.491 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # kill -0 951769 00:11:42.491 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # uname 00:11:42.491 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:42.491 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 951769 00:11:42.491 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:42.491 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:42.491 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 951769' 00:11:42.491 killing process with pid 951769 00:11:42.491 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # kill 951769 00:11:42.491 11:35:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@976 -- # wait 951769 00:11:42.752 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:42.752 00:11:42.752 real 0m20.053s 00:11:42.752 user 1m19.195s 00:11:42.752 sys 0m1.508s 00:11:42.752 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:42.752 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:42.752 ************************************ 00:11:42.752 END TEST nvmf_filesystem_no_in_capsule 00:11:42.752 ************************************ 00:11:42.752 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:42.752 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:42.752 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:42.752 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:42.752 ************************************ 00:11:42.752 START TEST nvmf_filesystem_in_capsule 00:11:42.752 ************************************ 00:11:42.752 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 4096 00:11:42.752 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:42.752 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:42.752 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:42.752 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:42.752 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:42.752 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=955793 00:11:42.752 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 955793 00:11:42.752 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 955793 ']' 00:11:42.752 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:42.752 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.752 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:42.752 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
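The three filesystem subtests that just finished (and repeat below with 4096-byte in-capsule data) all run the same check from target/filesystem.sh: make the filesystem on the partition, mount it, prove it is writable, then unmount and confirm both the target process and the partition survived. Condensed from the traced commands, with $fstype, $force (-F for ext4, -f for btrfs and xfs), and $nvmfpid as traced above:

  mkfs.$fstype $force /dev/nvme0n1p1
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa && sync                # write something through the filesystem
  rm /mnt/device/aaa && sync
  umount /mnt/device
  kill -0 "$nvmfpid"                           # nvmf_tgt must still be running
  lsblk -l -o NAME | grep -q -w nvme0n1p1      # partition must still be visible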
00:11:42.752 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:42.752 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:42.752 [2024-11-15 11:35:08.146331] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:11:42.752 [2024-11-15 11:35:08.146384] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.752 [2024-11-15 11:35:08.241221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:43.013 [2024-11-15 11:35:08.276060] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:43.013 [2024-11-15 11:35:08.276090] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:43.013 [2024-11-15 11:35:08.276097] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:43.013 [2024-11-15 11:35:08.276101] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:43.013 [2024-11-15 11:35:08.276106] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:43.013 [2024-11-15 11:35:08.277475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.013 [2024-11-15 11:35:08.277628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:43.013 [2024-11-15 11:35:08.277938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.013 [2024-11-15 11:35:08.277939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:43.582 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:43.582 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:11:43.582 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:43.582 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:43.582 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.582 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:43.582 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:43.582 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:43.582 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.582 11:35:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.582 [2024-11-15 11:35:08.998101] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:43.582 11:35:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.582 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:43.582 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.582 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.842 Malloc1 00:11:43.842 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.842 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:43.842 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.842 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.842 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.842 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:43.842 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.842 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.842 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.842 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:43.842 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.842 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.842 [2024-11-15 11:35:09.128387] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:43.842 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.842 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:43.842 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:11:43.842 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:11:43.842 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:11:43.842 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:11:43.842 11:35:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:43.842 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.842 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.842 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.842 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:11:43.842 { 00:11:43.842 "name": "Malloc1", 00:11:43.842 "aliases": [ 00:11:43.842 "557c3525-9258-45ab-967c-42d18083486c" 00:11:43.842 ], 00:11:43.842 "product_name": "Malloc disk", 00:11:43.842 "block_size": 512, 00:11:43.842 "num_blocks": 1048576, 00:11:43.842 "uuid": "557c3525-9258-45ab-967c-42d18083486c", 00:11:43.842 "assigned_rate_limits": { 00:11:43.842 "rw_ios_per_sec": 0, 00:11:43.842 "rw_mbytes_per_sec": 0, 00:11:43.842 "r_mbytes_per_sec": 0, 00:11:43.842 "w_mbytes_per_sec": 0 00:11:43.842 }, 00:11:43.842 "claimed": true, 00:11:43.842 "claim_type": "exclusive_write", 00:11:43.842 "zoned": false, 00:11:43.842 "supported_io_types": { 00:11:43.842 "read": true, 00:11:43.842 "write": true, 00:11:43.842 "unmap": true, 00:11:43.842 "flush": true, 00:11:43.842 "reset": true, 00:11:43.842 "nvme_admin": false, 00:11:43.842 "nvme_io": false, 00:11:43.842 "nvme_io_md": false, 00:11:43.842 "write_zeroes": true, 00:11:43.842 "zcopy": true, 00:11:43.842 "get_zone_info": false, 00:11:43.842 "zone_management": false, 00:11:43.842 "zone_append": false, 00:11:43.842 "compare": false, 00:11:43.842 "compare_and_write": false, 00:11:43.842 "abort": true, 00:11:43.842 "seek_hole": false, 00:11:43.842 "seek_data": false, 00:11:43.842 "copy": true, 00:11:43.842 "nvme_iov_md": false 00:11:43.842 }, 00:11:43.842 "memory_domains": [ 00:11:43.842 { 00:11:43.842 "dma_device_id": "system", 00:11:43.842 "dma_device_type": 1 00:11:43.842 }, 00:11:43.842 { 00:11:43.842 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.842 "dma_device_type": 2 00:11:43.842 } 00:11:43.842 ], 00:11:43.842 "driver_specific": {} 00:11:43.842 } 00:11:43.842 ]' 00:11:43.842 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:11:43.842 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:11:43.842 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:11:43.842 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:11:43.842 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:11:43.842 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:11:43.842 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:43.842 11:35:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:45.751 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:45.751 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:11:45.751 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:45.751 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:45.751 11:35:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:11:47.664 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:47.664 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:47.664 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:47.664 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:47.664 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:47.664 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:11:47.664 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:47.664 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:47.664 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:47.664 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:47.664 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:47.664 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:47.664 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:47.664 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:47.664 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:47.664 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:47.664 11:35:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:47.664 11:35:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:48.233 11:35:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:49.170 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:49.171 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:49.171 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:49.171 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:49.171 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:49.171 ************************************ 00:11:49.171 START TEST filesystem_in_capsule_ext4 00:11:49.171 ************************************ 00:11:49.171 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:49.171 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:49.171 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:49.171 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:49.171 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:11:49.171 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:49.171 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:11:49.171 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local force 00:11:49.171 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:11:49.171 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:11:49.171 11:35:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:49.171 mke2fs 1.47.0 (5-Feb-2023) 00:11:49.171 Discarding device blocks: 0/522240 done 00:11:49.171 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:49.171 Filesystem UUID: 573f7a6f-fce1-43eb-8602-14f3b7c47be3 00:11:49.171 Superblock backups stored on blocks: 00:11:49.171 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:49.171 00:11:49.171 Allocating group tables: 0/64 done 00:11:49.171 Writing inode tables: 
0/64 done 00:11:52.461 Creating journal (8192 blocks): done 00:11:54.226 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:11:54.226 00:11:54.226 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@947 -- # return 0 00:11:54.226 11:35:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:00.798 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:00.798 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:00.798 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:00.798 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:00.798 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:00.798 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:00.798 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 955793 00:12:00.798 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:00.798 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:00.798 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:00.798 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:00.798 00:12:00.798 real 0m10.845s 00:12:00.798 user 0m0.037s 00:12:00.798 sys 0m0.077s 00:12:00.798 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:00.798 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:00.798 ************************************ 00:12:00.798 END TEST filesystem_in_capsule_ext4 00:12:00.798 ************************************ 00:12:00.798 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:00.798 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:00.798 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:00.798 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.798 
************************************ 00:12:00.798 START TEST filesystem_in_capsule_btrfs 00:12:00.798 ************************************ 00:12:00.798 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:00.798 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:00.798 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:00.798 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:00.798 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:12:00.798 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:12:00.798 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:12:00.798 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local force 00:12:00.798 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:12:00.798 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:12:00.798 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:00.798 btrfs-progs v6.8.1 00:12:00.798 See https://btrfs.readthedocs.io for more information. 00:12:00.798 00:12:00.798 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:00.798 NOTE: several default settings have changed in version 5.15, please make sure 00:12:00.798 this does not affect your deployments: 00:12:00.798 - DUP for metadata (-m dup) 00:12:00.798 - enabled no-holes (-O no-holes) 00:12:00.798 - enabled free-space-tree (-R free-space-tree) 00:12:00.798 00:12:00.798 Label: (null) 00:12:00.798 UUID: 411ef3dc-8b5a-4e16-be09-c9d5baaa7729 00:12:00.798 Node size: 16384 00:12:00.798 Sector size: 4096 (CPU page size: 4096) 00:12:00.798 Filesystem size: 510.00MiB 00:12:00.798 Block group profiles: 00:12:00.798 Data: single 8.00MiB 00:12:00.798 Metadata: DUP 32.00MiB 00:12:00.798 System: DUP 8.00MiB 00:12:00.798 SSD detected: yes 00:12:00.798 Zoned device: no 00:12:00.798 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:00.798 Checksum: crc32c 00:12:00.798 Number of devices: 1 00:12:00.798 Devices: 00:12:00.798 ID SIZE PATH 00:12:00.798 1 510.00MiB /dev/nvme0n1p1 00:12:00.798 00:12:00.798 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@947 -- # return 0 00:12:00.798 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:00.799 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:00.799 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:00.799 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:00.799 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:00.799 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:00.799 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:00.799 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 955793 00:12:00.799 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:00.799 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:00.799 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:00.799 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:00.799 00:12:00.799 real 0m0.564s 00:12:00.799 user 0m0.024s 00:12:00.799 sys 0m0.119s 00:12:00.799 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:00.799 11:35:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:12:00.799 ************************************ 00:12:00.799 END TEST filesystem_in_capsule_btrfs 00:12:00.799 ************************************ 00:12:00.799 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:00.799 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:00.799 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:00.799 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.799 ************************************ 00:12:00.799 START TEST filesystem_in_capsule_xfs 00:12:00.799 ************************************ 00:12:00.799 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:12:00.799 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:00.799 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:00.799 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:00.799 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:12:00.799 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:12:00.799 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local i=0 00:12:00.799 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local force 00:12:00.799 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:12:00.799 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # force=-f 00:12:00.799 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:00.799 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:00.799 = sectsz=512 attr=2, projid32bit=1 00:12:00.799 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:00.799 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:00.799 data = bsize=4096 blocks=130560, imaxpct=25 00:12:00.799 = sunit=0 swidth=0 blks 00:12:00.799 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:00.799 log =internal log bsize=4096 blocks=16384, version=2 00:12:00.799 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:00.799 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:01.735 Discarding blocks...Done. 
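Both the btrfs and xfs cases above drive the same make_filesystem helper from autotest_common.sh, and its shape can be read directly off the xtrace: locals fstype/dev_name/i/force, an ext4-specific force flag check, then mkfs.$fstype on the partition and return 0. A minimal sketch of that helper under those observations; the retry bound and sleep are assumptions (the trace initializes i=0 but never shows a failing iteration), not the literal source:

# Sketch of make_filesystem as reconstructed from the xtrace above.
make_filesystem() {
    local fstype=$1
    local dev_name=$2
    local i=0
    local force
    # mke2fs forces with -F; the other mkfs tools use -f (only the
    # non-ext4 branch is visible in the trace above)
    if [ "$fstype" = ext4 ]; then
        force=-F
    else
        force=-f
    fi
    # retry a few times in case the device is still settling
    # (the bound of 3 and the 1s sleep are assumptions)
    until mkfs."$fstype" $force "$dev_name"; do
        i=$((i + 1))
        [ "$i" -ge 3 ] && return 1
        sleep 1
    done
    return 0
}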
00:12:01.735 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@947 -- # return 0 00:12:01.735 11:35:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:04.270 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:04.270 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:04.270 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:04.270 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:04.270 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:04.270 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:04.270 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 955793 00:12:04.270 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:04.270 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:04.270 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:04.270 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:04.270 00:12:04.270 real 0m3.380s 00:12:04.270 user 0m0.029s 00:12:04.270 sys 0m0.078s 00:12:04.270 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:04.270 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:04.270 ************************************ 00:12:04.270 END TEST filesystem_in_capsule_xfs 00:12:04.270 ************************************ 00:12:04.270 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:04.530 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:04.530 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:04.530 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.530 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:04.530 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1221 -- # local i=0 00:12:04.530 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:04.530 11:35:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:04.530 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:04.530 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:04.530 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:12:04.530 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:04.530 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.530 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.530 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.791 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:04.791 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 955793 00:12:04.791 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 955793 ']' 00:12:04.791 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # kill -0 955793 00:12:04.791 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # uname 00:12:04.791 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:04.791 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 955793 00:12:04.791 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:04.791 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:04.791 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 955793' 00:12:04.791 killing process with pid 955793 00:12:04.791 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # kill 955793 00:12:04.791 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@976 -- # wait 955793 00:12:05.061 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:05.061 00:12:05.061 real 0m22.211s 00:12:05.061 user 1m27.909s 00:12:05.061 sys 0m1.499s 00:12:05.061 11:35:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:05.061 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:05.061 ************************************ 00:12:05.061 END TEST nvmf_filesystem_in_capsule 00:12:05.061 ************************************ 00:12:05.061 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:05.061 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:05.061 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:05.061 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:05.061 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:05.061 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:05.061 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:05.061 rmmod nvme_tcp 00:12:05.061 rmmod nvme_fabrics 00:12:05.061 rmmod nvme_keyring 00:12:05.061 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:05.061 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:05.061 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:05.061 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:05.061 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:05.061 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:05.061 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:05.061 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:05.061 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:05.061 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:05.061 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:05.061 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:05.061 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:05.061 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.061 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.061 11:35:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.611 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:07.611 00:12:07.612 real 0m52.667s 00:12:07.612 user 2m49.564s 00:12:07.612 sys 0m8.913s 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:07.612 
************************************ 00:12:07.612 END TEST nvmf_filesystem 00:12:07.612 ************************************ 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:07.612 ************************************ 00:12:07.612 START TEST nvmf_target_discovery 00:12:07.612 ************************************ 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:07.612 * Looking for test storage... 00:12:07.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:07.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.612 --rc genhtml_branch_coverage=1 00:12:07.612 --rc genhtml_function_coverage=1 00:12:07.612 --rc genhtml_legend=1 00:12:07.612 --rc geninfo_all_blocks=1 00:12:07.612 --rc geninfo_unexecuted_blocks=1 00:12:07.612 00:12:07.612 ' 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:07.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.612 --rc genhtml_branch_coverage=1 00:12:07.612 --rc genhtml_function_coverage=1 00:12:07.612 --rc genhtml_legend=1 00:12:07.612 --rc geninfo_all_blocks=1 00:12:07.612 --rc geninfo_unexecuted_blocks=1 00:12:07.612 00:12:07.612 ' 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:07.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.612 --rc genhtml_branch_coverage=1 00:12:07.612 --rc genhtml_function_coverage=1 00:12:07.612 --rc genhtml_legend=1 00:12:07.612 --rc geninfo_all_blocks=1 00:12:07.612 --rc geninfo_unexecuted_blocks=1 00:12:07.612 00:12:07.612 ' 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:07.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.612 --rc genhtml_branch_coverage=1 00:12:07.612 --rc genhtml_function_coverage=1 00:12:07.612 --rc genhtml_legend=1 00:12:07.612 --rc geninfo_all_blocks=1 00:12:07.612 --rc geninfo_unexecuted_blocks=1 00:12:07.612 00:12:07.612 ' 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:07.612 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:07.613 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.613 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.613 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.613 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:07.613 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.613 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:07.613 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:07.613 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:07.613 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:07.613 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:07.613 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:07.613 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:07.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:07.613 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:07.613 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:07.613 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:07.613 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:07.613 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:07.613 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:07.613 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:07.613 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:07.613 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:07.613 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:07.613 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:07.613 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:07.613 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:07.613 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.613 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:07.613 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.613 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:07.613 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:07.613 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:07.613 11:35:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.785 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:15.785 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:15.785 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:15.785 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:15.785 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:15.785 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:15.785 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:15.785 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:15.785 11:35:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:15.785 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:15.785 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:15.785 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:15.785 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:15.785 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:15.785 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:15.785 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:15.785 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:15.785 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:15.785 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:15.785 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:15.785 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:15.785 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:15.785 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:15.785 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:15.785 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:15.785 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:15.785 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:15.785 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:15.785 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:15.785 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:15.785 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:15.785 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:15.785 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:15.785 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:15.785 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:15.785 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:15.785 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:15.785 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:15.786 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:15.786 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
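The scan above resolves each supported NIC (here the two Intel E810 0x159b functions at 0000:4b:00.x) to its kernel net devices by globbing sysfs, exactly as nvmf/common.sh does with pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*). A standalone sketch of that mapping; the pci_devs values are examples standing in for whatever the PCI bus cache returned:

# Map PCI functions to their network interface names via sysfs.
pci_devs=(0000:4b:00.0 0000:4b:00.1)   # example addresses
net_devs=()
for pci in "${pci_devs[@]}"; do
    # each entry under .../net/ is one interface backed by this function
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    [ -e "${pci_net_devs[0]}" ] || continue    # no netdev bound: skip
    pci_net_devs=("${pci_net_devs[@]##*/}")    # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done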
00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:15.786 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:15.786 11:35:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:15.786 11:35:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:15.786 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:15.786 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:15.786 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:15.786 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:15.786 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:15.786 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:12:15.786 00:12:15.786 --- 10.0.0.2 ping statistics --- 00:12:15.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.786 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:12:15.786 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:15.786 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:15.786 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:12:15.786 00:12:15.786 --- 10.0.0.1 ping statistics --- 00:12:15.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.786 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:12:15.786 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:15.786 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:15.786 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:15.786 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:15.786 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:15.786 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:15.786 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:15.786 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:15.786 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:15.786 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:15.786 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:15.786 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:15.786 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.786 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=964634 00:12:15.786 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 964634 00:12:15.786 11:35:40 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:15.786 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # '[' -z 964634 ']' 00:12:15.786 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.786 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:15.786 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.786 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:15.786 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.786 [2024-11-15 11:35:40.171100] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:12:15.786 [2024-11-15 11:35:40.171154] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.787 [2024-11-15 11:35:40.268005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:15.787 [2024-11-15 11:35:40.303777] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:15.787 [2024-11-15 11:35:40.303809] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:15.787 [2024-11-15 11:35:40.303817] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:15.787 [2024-11-15 11:35:40.303824] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:15.787 [2024-11-15 11:35:40.303830] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
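At this point nvmfappstart has launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace built a few entries earlier, and waitforlisten blocks until the app answers on /var/tmp/spdk.sock (the socket named in the "Waiting for process..." message above). A minimal sketch of that launch-and-wait pattern; the polling loop is an assumed implementation inferred from that message, not the literal waitforlisten source:

# Launch the target inside the test namespace, mirroring the command above.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Poll until the RPC UNIX socket appears (assumed mechanics; 10s budget).
rpc_sock=/var/tmp/spdk.sock
for _ in $(seq 1 100); do
    kill -0 "$nvmfpid" 2>/dev/null || exit 1   # target died during startup
    [ -S "$rpc_sock" ] && break                # socket up: RPCs can proceed
    sleep 0.1
done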
00:12:15.787 [2024-11-15 11:35:40.305334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.787 [2024-11-15 11:35:40.305488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:15.787 [2024-11-15 11:35:40.305625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:15.787 [2024-11-15 11:35:40.305840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.787 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:15.787 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@866 -- # return 0 00:12:15.787 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:15.787 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:15.787 11:35:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.787 [2024-11-15 11:35:41.012709] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.787 Null1 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.787 11:35:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.787 [2024-11-15 11:35:41.073054] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.787 Null2 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:15.787 Null3 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.787 Null4 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.787 11:35:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.787 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.788 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.788 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:15.788 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.788 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.788 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.788 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:15.788 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.788 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.788 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.788 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:12:16.050 00:12:16.050 Discovery Log Number of Records 6, Generation counter 6 00:12:16.050 =====Discovery Log Entry 0====== 00:12:16.050 trtype: tcp 00:12:16.050 adrfam: ipv4 00:12:16.050 subtype: current discovery subsystem 00:12:16.050 treq: not required 00:12:16.050 portid: 0 00:12:16.050 trsvcid: 4420 00:12:16.050 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:16.050 traddr: 10.0.0.2 00:12:16.050 eflags: explicit discovery connections, duplicate discovery information 00:12:16.050 sectype: none 00:12:16.050 =====Discovery Log Entry 1====== 00:12:16.050 trtype: tcp 00:12:16.050 adrfam: ipv4 00:12:16.050 subtype: nvme subsystem 00:12:16.050 treq: not required 00:12:16.050 portid: 0 00:12:16.050 trsvcid: 4420 00:12:16.050 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:16.050 traddr: 10.0.0.2 00:12:16.050 eflags: none 00:12:16.050 sectype: none 00:12:16.050 =====Discovery Log Entry 2====== 00:12:16.050 trtype: tcp 00:12:16.050 adrfam: ipv4 00:12:16.050 subtype: nvme subsystem 00:12:16.050 treq: not required 00:12:16.050 portid: 0 00:12:16.050 trsvcid: 4420 00:12:16.050 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:16.050 traddr: 10.0.0.2 00:12:16.050 eflags: none 00:12:16.050 sectype: none 00:12:16.050 =====Discovery Log Entry 3====== 00:12:16.050 trtype: tcp 00:12:16.050 adrfam: ipv4 00:12:16.050 subtype: nvme subsystem 00:12:16.050 treq: not required 00:12:16.050 portid: 0 00:12:16.050 trsvcid: 4420 00:12:16.050 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:16.050 traddr: 10.0.0.2 00:12:16.050 eflags: none 00:12:16.050 sectype: none 00:12:16.050 =====Discovery Log Entry 4====== 00:12:16.050 trtype: tcp 00:12:16.050 adrfam: ipv4 00:12:16.050 subtype: nvme subsystem 
00:12:16.050 treq: not required 00:12:16.050 portid: 0 00:12:16.050 trsvcid: 4420 00:12:16.050 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:16.050 traddr: 10.0.0.2 00:12:16.050 eflags: none 00:12:16.050 sectype: none 00:12:16.050 =====Discovery Log Entry 5====== 00:12:16.050 trtype: tcp 00:12:16.050 adrfam: ipv4 00:12:16.050 subtype: discovery subsystem referral 00:12:16.050 treq: not required 00:12:16.050 portid: 0 00:12:16.050 trsvcid: 4430 00:12:16.050 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:16.050 traddr: 10.0.0.2 00:12:16.050 eflags: none 00:12:16.050 sectype: none 00:12:16.050 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:16.050 Perform nvmf subsystem discovery via RPC 00:12:16.050 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:16.050 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.050 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.050 [ 00:12:16.050 { 00:12:16.050 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:16.050 "subtype": "Discovery", 00:12:16.050 "listen_addresses": [ 00:12:16.050 { 00:12:16.050 "trtype": "TCP", 00:12:16.050 "adrfam": "IPv4", 00:12:16.050 "traddr": "10.0.0.2", 00:12:16.050 "trsvcid": "4420" 00:12:16.050 } 00:12:16.050 ], 00:12:16.050 "allow_any_host": true, 00:12:16.050 "hosts": [] 00:12:16.050 }, 00:12:16.050 { 00:12:16.050 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:16.050 "subtype": "NVMe", 00:12:16.050 "listen_addresses": [ 00:12:16.050 { 00:12:16.050 "trtype": "TCP", 00:12:16.050 "adrfam": "IPv4", 00:12:16.050 "traddr": "10.0.0.2", 00:12:16.050 "trsvcid": "4420" 00:12:16.050 } 00:12:16.050 ], 00:12:16.050 "allow_any_host": true, 00:12:16.050 "hosts": [], 00:12:16.050 "serial_number": "SPDK00000000000001", 00:12:16.050 "model_number": "SPDK bdev Controller", 00:12:16.050 "max_namespaces": 32, 00:12:16.050 "min_cntlid": 1, 00:12:16.050 "max_cntlid": 65519, 00:12:16.050 "namespaces": [ 00:12:16.050 { 00:12:16.050 "nsid": 1, 00:12:16.050 "bdev_name": "Null1", 00:12:16.050 "name": "Null1", 00:12:16.050 "nguid": "1EBAB737FE924463800EAA4431D2C3A6", 00:12:16.050 "uuid": "1ebab737-fe92-4463-800e-aa4431d2c3a6" 00:12:16.050 } 00:12:16.050 ] 00:12:16.050 }, 00:12:16.050 { 00:12:16.050 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:16.050 "subtype": "NVMe", 00:12:16.050 "listen_addresses": [ 00:12:16.050 { 00:12:16.050 "trtype": "TCP", 00:12:16.050 "adrfam": "IPv4", 00:12:16.050 "traddr": "10.0.0.2", 00:12:16.050 "trsvcid": "4420" 00:12:16.050 } 00:12:16.050 ], 00:12:16.050 "allow_any_host": true, 00:12:16.050 "hosts": [], 00:12:16.050 "serial_number": "SPDK00000000000002", 00:12:16.050 "model_number": "SPDK bdev Controller", 00:12:16.050 "max_namespaces": 32, 00:12:16.050 "min_cntlid": 1, 00:12:16.050 "max_cntlid": 65519, 00:12:16.050 "namespaces": [ 00:12:16.050 { 00:12:16.050 "nsid": 1, 00:12:16.050 "bdev_name": "Null2", 00:12:16.050 "name": "Null2", 00:12:16.050 "nguid": "2F57CB6E434E4B60B9BA7FC3C62BA073", 00:12:16.050 "uuid": "2f57cb6e-434e-4b60-b9ba-7fc3c62ba073" 00:12:16.050 } 00:12:16.050 ] 00:12:16.050 }, 00:12:16.050 { 00:12:16.050 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:16.050 "subtype": "NVMe", 00:12:16.050 "listen_addresses": [ 00:12:16.050 { 00:12:16.050 "trtype": "TCP", 00:12:16.050 "adrfam": "IPv4", 00:12:16.050 "traddr": "10.0.0.2", 
00:12:16.050 "trsvcid": "4420" 00:12:16.050 } 00:12:16.050 ], 00:12:16.050 "allow_any_host": true, 00:12:16.050 "hosts": [], 00:12:16.050 "serial_number": "SPDK00000000000003", 00:12:16.050 "model_number": "SPDK bdev Controller", 00:12:16.050 "max_namespaces": 32, 00:12:16.050 "min_cntlid": 1, 00:12:16.050 "max_cntlid": 65519, 00:12:16.050 "namespaces": [ 00:12:16.050 { 00:12:16.050 "nsid": 1, 00:12:16.050 "bdev_name": "Null3", 00:12:16.050 "name": "Null3", 00:12:16.050 "nguid": "E8418A9BA9034B6E8D03D05421CF570D", 00:12:16.050 "uuid": "e8418a9b-a903-4b6e-8d03-d05421cf570d" 00:12:16.050 } 00:12:16.050 ] 00:12:16.050 }, 00:12:16.050 { 00:12:16.050 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:16.050 "subtype": "NVMe", 00:12:16.050 "listen_addresses": [ 00:12:16.050 { 00:12:16.050 "trtype": "TCP", 00:12:16.050 "adrfam": "IPv4", 00:12:16.050 "traddr": "10.0.0.2", 00:12:16.050 "trsvcid": "4420" 00:12:16.050 } 00:12:16.050 ], 00:12:16.050 "allow_any_host": true, 00:12:16.050 "hosts": [], 00:12:16.050 "serial_number": "SPDK00000000000004", 00:12:16.050 "model_number": "SPDK bdev Controller", 00:12:16.050 "max_namespaces": 32, 00:12:16.050 "min_cntlid": 1, 00:12:16.050 "max_cntlid": 65519, 00:12:16.050 "namespaces": [ 00:12:16.050 { 00:12:16.050 "nsid": 1, 00:12:16.050 "bdev_name": "Null4", 00:12:16.050 "name": "Null4", 00:12:16.050 "nguid": "5BCCB1AC5DB145E6B630D508B0493D82", 00:12:16.050 "uuid": "5bccb1ac-5db1-45e6-b630-d508b0493d82" 00:12:16.050 } 00:12:16.050 ] 00:12:16.050 } 00:12:16.050 ] 00:12:16.050 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.050 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:16.050 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:16.050 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:16.050 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.050 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.050 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.050 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:16.050 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.050 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.050 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.050 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:16.050 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:16.050 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.050 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.050 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.050 11:35:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:16.050 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.051 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.051 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.051 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:16.051 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:16.051 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.051 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.051 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.051 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:16.051 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.051 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.051 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.051 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:16.051 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:16.051 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.051 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.313 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.313 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:16.313 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.313 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.313 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.313 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:16.313 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.313 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.313 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.313 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:16.313 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:16.313 11:35:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.313 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.313 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.313 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:16.313 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:16.313 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:16.313 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:16.313 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:16.313 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:16.313 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:16.313 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:16.313 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:16.313 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:16.313 rmmod nvme_tcp 00:12:16.313 rmmod nvme_fabrics 00:12:16.313 rmmod nvme_keyring 00:12:16.313 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:16.313 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:16.313 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:16.313 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 964634 ']' 00:12:16.313 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 964634 00:12:16.313 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' -z 964634 ']' 00:12:16.313 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # kill -0 964634 00:12:16.313 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # uname 00:12:16.313 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:16.313 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 964634 00:12:16.313 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:16.313 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:16.313 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 964634' 00:12:16.313 killing process with pid 964634 00:12:16.313 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # kill 964634 00:12:16.313 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@976 -- # wait 964634 00:12:16.575 11:35:41 
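The teardown just logged mirrors the setup exactly; condensed, again assuming scripts/rpc.py against the default socket:

    for i in 1 2 3 4; do
        scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
        scripts/rpc.py bdev_null_delete "Null$i"
    done
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
    # Sanity check, as in discovery.sh line 49: no bdevs should remain.
    scripts/rpc.py bdev_get_bdevs | jq -r '.[].name'    # expect empty output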
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:16.575 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:16.575 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:16.575 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:16.575 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:16.575 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:16.575 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:16.575 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:16.575 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:16.575 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.575 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:16.575 11:35:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:18.490 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:18.490 00:12:18.490 real 0m11.376s 00:12:18.490 user 0m8.610s 00:12:18.490 sys 0m5.839s 00:12:18.490 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:18.490 11:35:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:18.490 ************************************ 00:12:18.490 END TEST nvmf_target_discovery 00:12:18.490 ************************************ 00:12:18.752 11:35:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:18.752 11:35:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:18.752 11:35:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:18.752 11:35:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:18.752 ************************************ 00:12:18.752 START TEST nvmf_referrals 00:12:18.752 ************************************ 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:18.752 * Looking for test storage... 
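run_test invokes the next script with exactly the arguments logged above, so the referrals test can plausibly be reproduced outside Jenkins along these lines (root is needed for the netns and iptables setup the script performs; the checkout path is whatever your workspace uses):

    cd /path/to/spdk    # hypothetical checkout location
    sudo test/nvmf/target/referrals.sh --transport=tcp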
00:12:18.752 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:18.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.752 --rc genhtml_branch_coverage=1 00:12:18.752 --rc genhtml_function_coverage=1 00:12:18.752 --rc genhtml_legend=1 00:12:18.752 --rc geninfo_all_blocks=1 00:12:18.752 --rc geninfo_unexecuted_blocks=1 00:12:18.752 00:12:18.752 ' 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:18.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.752 --rc genhtml_branch_coverage=1 00:12:18.752 --rc genhtml_function_coverage=1 00:12:18.752 --rc genhtml_legend=1 00:12:18.752 --rc geninfo_all_blocks=1 00:12:18.752 --rc geninfo_unexecuted_blocks=1 00:12:18.752 00:12:18.752 ' 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:18.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.752 --rc genhtml_branch_coverage=1 00:12:18.752 --rc genhtml_function_coverage=1 00:12:18.752 --rc genhtml_legend=1 00:12:18.752 --rc geninfo_all_blocks=1 00:12:18.752 --rc geninfo_unexecuted_blocks=1 00:12:18.752 00:12:18.752 ' 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:18.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.752 --rc genhtml_branch_coverage=1 00:12:18.752 --rc genhtml_function_coverage=1 00:12:18.752 --rc genhtml_legend=1 00:12:18.752 --rc geninfo_all_blocks=1 00:12:18.752 --rc geninfo_unexecuted_blocks=1 00:12:18.752 00:12:18.752 ' 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:18.752 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.015 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:19.015 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:19.015 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.015 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:19.015 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:19.015 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.015 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:19.015 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:12:19.015 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.015 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.015 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.015 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.015 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.015 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.015 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:19.015 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.015 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:19.015 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:19.015 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:19.015 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:19.015 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.015 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.015 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:19.015 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:19.015 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:19.015 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:19.015 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:19.015 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:19.015 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:12:19.015 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:19.015 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:19.015 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:19.015 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:19.015 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:19.015 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:19.016 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:19.016 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:19.016 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:19.016 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:19.016 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.016 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:19.016 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.016 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:19.016 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:19.016 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:19.016 11:35:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.327 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:27.327 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:27.327 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:27.327 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:27.328 11:35:51 
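nvmf/common.sh buckets the machine's NICs into e810/x722/mlx arrays by PCI vendor:device ID, as the pci_bus_cache lookups below show. A rough by-hand equivalent for the two Intel E810 IDs involved here, assuming lspci is installed (the pci_bus_cache plumbing itself is SPDK-internal):

    # 0x1592 and 0x159b are the two E810 device IDs the harness checks first.
    lspci -Dnn -d 8086:1592
    lspci -Dnn -d 8086:159b    # matches both ports found below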
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:27.328 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:27.328 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:27.328 
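Each matched PCI function is then mapped to its kernel net device by globbing sysfs, per the pci_net_devs assignment that follows; the same check by hand:

    # Interfaces registered under the first E810 port (address from the log).
    ls /sys/bus/pci/devices/0000:4b:00.0/net    # -> cvl_0_0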
11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:27.328 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:27.328 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:27.328 11:35:51 
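With cvl_0_0 picked as the target interface and cvl_0_1 as the initiator, nvmf_tcp_init builds a two-namespace loopback topology; the ip commands logged below reduce to:

    # Target NIC moves into its own namespace; initiator NIC stays in the root one.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up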
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:27.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:27.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.625 ms 00:12:27.328 00:12:27.328 --- 10.0.0.2 ping statistics --- 00:12:27.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.328 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:27.328 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:27.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:12:27.328 00:12:27.328 --- 10.0.0.1 ping statistics --- 00:12:27.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.328 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:27.328 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:27.329 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:27.329 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:27.329 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:27.329 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:27.329 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:27.329 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:27.329 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:27.329 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:27.329 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.329 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=969081 00:12:27.329 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 969081 00:12:27.329 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:27.329 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # '[' -z 969081 ']' 00:12:27.329 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.329 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:27.329 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
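nvmfappstart then launches the target inside that namespace, exactly as the ip netns exec line above shows; stripped of the harness wrappers (the socket poll at the end is a simplification, not the harness's waitforlisten):

    sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # -i 0      shared-memory id (the spdk0 file prefix in the EAL args below)
    # -e 0xFFFF tracepoint group mask, echoed by the app at startup
    # -m 0xF    core mask: four reactors on cores 0-3
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done    # crude waitforlisten stand-in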
00:12:27.329 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:27.329 11:35:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.329 [2024-11-15 11:35:51.819329] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:12:27.329 [2024-11-15 11:35:51.819393] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.329 [2024-11-15 11:35:51.917751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:27.329 [2024-11-15 11:35:51.971046] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:27.329 [2024-11-15 11:35:51.971095] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:27.329 [2024-11-15 11:35:51.971110] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:27.329 [2024-11-15 11:35:51.971117] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:27.329 [2024-11-15 11:35:51.971123] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:27.329 [2024-11-15 11:35:51.973240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.329 [2024-11-15 11:35:51.973396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:27.329 [2024-11-15 11:35:51.973431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.329 [2024-11-15 11:35:51.973431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:27.329 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:27.329 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@866 -- # return 0 00:12:27.329 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:27.329 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:27.329 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.329 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.329 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:27.329 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.329 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.329 [2024-11-15 11:35:52.679450] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:27.329 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.329 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:27.329 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.329 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:12:27.329 [2024-11-15 11:35:52.706805] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:27.329 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.329 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:27.329 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.329 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.329 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.329 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:27.329 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.329 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.329 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.329 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:27.329 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.329 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.329 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.329 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:27.329 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:27.329 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.329 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.329 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.329 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:27.329 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:27.329 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:27.329 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:27.329 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:27.329 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.329 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.329 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:27.329 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.591 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:27.591 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:27.591 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:27.591 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:27.591 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:27.591 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:27.591 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:27.591 11:35:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:27.591 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:27.591 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:27.591 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:27.591 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.591 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.591 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.591 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:27.591 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.591 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.853 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.853 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:27.853 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.853 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.853 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.853 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:27.853 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:27.853 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.853 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:27.853 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.853 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:27.853 11:35:53 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:27.853 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:27.853 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:27.853 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:27.853 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:27.853 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:27.853 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:27.853 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:27.853 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:27.853 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.853 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.117 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.117 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:28.117 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.117 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.117 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.117 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:28.117 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:28.117 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:28.117 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:28.118 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.118 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.118 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:28.118 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.118 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:28.118 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:28.118 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:28.118 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:12:28.118 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:28.118 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:28.118 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:28.118 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:28.379 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:28.379 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:28.379 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:28.379 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:28.379 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:28.379 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:28.379 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:28.379 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:28.379 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:28.379 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:28.379 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:28.379 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:28.379 11:35:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:28.652 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:28.652 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:28.652 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.652 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.652 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.652 11:35:54 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:28.652 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:28.652 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:28.652 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:28.652 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.652 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.652 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:28.652 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.652 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:28.652 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:28.652 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:28.652 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:28.652 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:28.652 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:28.652 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:28.652 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:28.913 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:28.913 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:28.913 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:28.913 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:28.913 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:28.913 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:28.913 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:29.174 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:29.174 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:29.174 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:29.174 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:12:29.174 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:29.174 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:29.174 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:29.174 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:29.174 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.174 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.174 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.174 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:29.174 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:29.174 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.174 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:29.174 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.436 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:29.436 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:29.436 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:29.436 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:29.436 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:29.436 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:29.436 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:29.436 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:29.436 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:29.436 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:29.436 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:29.436 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:29.436 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:29.436 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
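For reference, the referral checks above boil down to a short flow: register referrals over the RPC socket, then confirm the same addresses appear both in the target's RPC view and in the discovery log page a host reads back. A minimal sketch, assuming SPDK's bundled scripts/rpc.py stands in for the suite's rpc_cmd wrapper and the target is listening on 10.0.0.2:8009 as in this run (the --hostnqn/--hostid arguments shown in the log are omitted for brevity):

    # Register three discovery referrals on the running target.
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done

    # Target-side view: referral addresses as reported over RPC.
    scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

    # Host-side view: the same addresses read back from the discovery log page.
    # The jq filter drops the "current discovery subsystem" record so only the
    # referral entries remain, matching the comparison get_referral_ips performs.
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort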
00:12:29.436 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:12:29.436 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:29.436 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:29.702 rmmod nvme_tcp 00:12:29.702 rmmod nvme_fabrics 00:12:29.702 rmmod nvme_keyring 00:12:29.702 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:29.702 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:29.702 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:29.702 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 969081 ']' 00:12:29.702 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 969081 00:12:29.702 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' -z 969081 ']' 00:12:29.702 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # kill -0 969081 00:12:29.702 11:35:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # uname 00:12:29.702 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:29.702 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 969081 00:12:29.702 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:29.702 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:29.702 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # echo 'killing process with pid 969081' 00:12:29.702 killing process with pid 969081 00:12:29.702 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@971 -- # kill 969081 00:12:29.702 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@976 -- # wait 969081 00:12:29.702 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:29.702 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:29.702 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:29.702 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:29.702 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:29.702 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:29.702 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:29.702 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:29.702 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:29.963 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.963 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:29.963 11:35:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:31.878 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:31.878 00:12:31.879 real 0m13.246s 00:12:31.879 user 0m15.962s 00:12:31.879 sys 0m6.502s 00:12:31.879 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:31.879 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:31.879 ************************************ 00:12:31.879 END TEST nvmf_referrals 00:12:31.879 ************************************ 00:12:31.879 11:35:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:31.879 11:35:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:31.879 11:35:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:31.879 11:35:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:31.879 ************************************ 00:12:31.879 START TEST nvmf_connect_disconnect 00:12:31.879 ************************************ 00:12:31.879 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:32.141 * Looking for test storage... 00:12:32.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:32.141 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:32.141 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:12:32.141 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:32.141 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:32.141 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:32.141 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:32.141 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:32.141 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:32.141 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:32.141 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:32.141 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:32.141 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:32.141 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:32.141 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:32.141 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:32.141 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 
-- # case "$op" in 00:12:32.141 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:32.141 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:32.141 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:32.141 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:32.141 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:32.141 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:32.141 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:32.141 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:32.141 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:32.141 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:32.141 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:32.141 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:32.141 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:32.141 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:32.141 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:32.141 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:32.141 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:32.141 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:32.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.141 --rc genhtml_branch_coverage=1 00:12:32.141 --rc genhtml_function_coverage=1 00:12:32.141 --rc genhtml_legend=1 00:12:32.141 --rc geninfo_all_blocks=1 00:12:32.141 --rc geninfo_unexecuted_blocks=1 00:12:32.141 00:12:32.141 ' 00:12:32.141 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:32.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.141 --rc genhtml_branch_coverage=1 00:12:32.141 --rc genhtml_function_coverage=1 00:12:32.141 --rc genhtml_legend=1 00:12:32.141 --rc geninfo_all_blocks=1 00:12:32.141 --rc geninfo_unexecuted_blocks=1 00:12:32.141 00:12:32.141 ' 00:12:32.141 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:32.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.141 --rc genhtml_branch_coverage=1 00:12:32.141 --rc genhtml_function_coverage=1 00:12:32.141 --rc genhtml_legend=1 00:12:32.141 --rc geninfo_all_blocks=1 00:12:32.141 --rc geninfo_unexecuted_blocks=1 00:12:32.141 00:12:32.141 ' 00:12:32.141 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:32.141 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.141 --rc genhtml_branch_coverage=1 00:12:32.141 --rc genhtml_function_coverage=1 00:12:32.141 --rc genhtml_legend=1 00:12:32.141 --rc geninfo_all_blocks=1 00:12:32.141 --rc geninfo_unexecuted_blocks=1 00:12:32.141 00:12:32.141 ' 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:32.142 11:35:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:32.142 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:32.142 11:35:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:40.302 
11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:40.302 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:40.302 
11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:40.302 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:40.302 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
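The "Found net devices under ..." lines come from common.sh resolving each E810 PCI function to its kernel net device through sysfs. A minimal sketch of that mapping, assuming the two 0000:4b:00.x functions this run detected:

    # Resolve PCI functions to net device names, mirroring the
    # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) glob seen in the log.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$dev" ] || continue        # glob matched nothing: no driver bound
            echo "Found net devices under $pci: ${dev##*/}"
        done
    done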
00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:40.302 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:40.302 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:40.303 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:40.303 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:40.303 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:40.303 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:40.303 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:40.303 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:40.303 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:40.303 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:40.303 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:40.303 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:40.303 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:40.303 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:40.303 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:40.303 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:40.303 11:36:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:40.303 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:40.303 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:40.303 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:40.303 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:40.303 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:40.303 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.440 ms 00:12:40.303 00:12:40.303 --- 10.0.0.2 ping statistics --- 00:12:40.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.303 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:12:40.303 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:40.303 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:40.303 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:12:40.303 00:12:40.303 --- 10.0.0.1 ping statistics --- 00:12:40.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.303 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:12:40.303 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:40.303 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:40.303 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:40.303 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:40.303 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:40.303 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:40.303 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:40.303 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:40.303 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:40.303 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:40.303 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:40.303 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:40.303 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:40.303 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=974261 00:12:40.303 11:36:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 974261 00:12:40.303 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:40.303 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # '[' -z 974261 ']' 00:12:40.303 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.303 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:40.303 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:40.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.303 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:40.303 11:36:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:40.303 [2024-11-15 11:36:05.203680] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:12:40.303 [2024-11-15 11:36:05.203743] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:40.303 [2024-11-15 11:36:05.304666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:40.303 [2024-11-15 11:36:05.357396] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:40.303 [2024-11-15 11:36:05.357447] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:40.303 [2024-11-15 11:36:05.357456] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:40.303 [2024-11-15 11:36:05.357464] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:40.303 [2024-11-15 11:36:05.357470] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
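The nvmf_tcp_init sequence above builds a two-endpoint topology on a single machine: one NIC port is moved into a private network namespace and becomes the target side (10.0.0.2), while the other port stays in the root namespace as the initiator side (10.0.0.1); nvmf_tgt is then launched inside the namespace via ip netns exec. Condensed from the commands in the log:

    # Target port gets its own namespace so initiator and target stacks stay separate.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Initiator address in the root namespace, target address inside the namespace.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Open the NVMe/TCP port, then sanity-check reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1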
00:12:40.303 [2024-11-15 11:36:05.359942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:40.303 [2024-11-15 11:36:05.360103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:40.303 [2024-11-15 11:36:05.360265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:40.303 [2024-11-15 11:36:05.360266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.565 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:40.565 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@866 -- # return 0 00:12:40.565 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:40.565 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:40.565 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:40.826 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:40.826 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:40.826 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.826 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:40.826 [2024-11-15 11:36:06.085734] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:40.826 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.826 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:40.826 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.827 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:40.827 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.827 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:40.827 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:40.827 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.827 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:40.827 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.827 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:40.827 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.827 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:40.827 11:36:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.827 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.827 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.827 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:40.827 [2024-11-15 11:36:06.164878] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.827 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.827 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:40.827 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:40.827 11:36:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:45.031 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.328 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.623 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.825 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.121 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.121 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:59.121 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:59.121 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:59.121 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:59.121 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:59.121 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:59.121 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:59.121 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:59.121 rmmod nvme_tcp 00:12:59.121 rmmod nvme_fabrics 00:12:59.121 rmmod nvme_keyring 00:12:59.121 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:59.121 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:59.121 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:59.121 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 974261 ']' 00:12:59.121 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 974261 00:12:59.121 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' -z 974261 ']' 00:12:59.121 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # kill -0 974261 00:12:59.121 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # uname 
00:12:59.121 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:59.121 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 974261 00:12:59.121 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:59.121 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:59.121 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 974261' 00:12:59.121 killing process with pid 974261 00:12:59.121 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # kill 974261 00:12:59.121 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@976 -- # wait 974261 00:12:59.395 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:59.395 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:59.395 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:59.395 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:59.395 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:59.395 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:59.395 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:59.395 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:59.395 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:59.395 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:59.395 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:59.395 11:36:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:01.305 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:01.305 00:13:01.305 real 0m29.373s 00:13:01.305 user 1m18.972s 00:13:01.305 sys 0m7.201s 00:13:01.305 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:01.305 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:01.305 ************************************ 00:13:01.305 END TEST nvmf_connect_disconnect 00:13:01.305 ************************************ 00:13:01.305 11:36:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:01.305 11:36:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:01.305 11:36:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:01.305 11:36:26 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@10 -- # set +x 00:13:01.566 ************************************ 00:13:01.566 START TEST nvmf_multitarget 00:13:01.566 ************************************ 00:13:01.566 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:01.566 * Looking for test storage... 00:13:01.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:01.566 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:01.566 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:13:01.566 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:01.566 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:01.566 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:01.566 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:01.566 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:01.566 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:13:01.566 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:13:01.566 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:13:01.566 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:13:01.566 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:13:01.566 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:13:01.566 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:13:01.566 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:01.566 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:13:01.566 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:13:01.566 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:01.566 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:01.566 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:13:01.566 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:13:01.566 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:01.566 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:13:01.566 11:36:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:13:01.566 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:13:01.566 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:13:01.566 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:01.566 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:13:01.566 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:13:01.566 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:01.566 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:01.566 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:13:01.566 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:01.566 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:01.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.566 --rc genhtml_branch_coverage=1 00:13:01.566 --rc genhtml_function_coverage=1 00:13:01.566 --rc genhtml_legend=1 00:13:01.566 --rc geninfo_all_blocks=1 00:13:01.566 --rc geninfo_unexecuted_blocks=1 00:13:01.566 00:13:01.566 ' 00:13:01.566 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:01.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.566 --rc genhtml_branch_coverage=1 00:13:01.566 --rc genhtml_function_coverage=1 00:13:01.566 --rc genhtml_legend=1 00:13:01.566 --rc geninfo_all_blocks=1 00:13:01.566 --rc geninfo_unexecuted_blocks=1 00:13:01.566 00:13:01.566 ' 00:13:01.566 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:01.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.566 --rc genhtml_branch_coverage=1 00:13:01.566 --rc genhtml_function_coverage=1 00:13:01.566 --rc genhtml_legend=1 00:13:01.566 --rc geninfo_all_blocks=1 00:13:01.566 --rc geninfo_unexecuted_blocks=1 00:13:01.566 00:13:01.566 ' 00:13:01.566 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:01.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.566 --rc genhtml_branch_coverage=1 00:13:01.566 --rc genhtml_function_coverage=1 00:13:01.566 --rc genhtml_legend=1 00:13:01.566 --rc geninfo_all_blocks=1 00:13:01.566 --rc geninfo_unexecuted_blocks=1 00:13:01.566 00:13:01.566 ' 00:13:01.566 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:01.566 11:36:27 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:01.566 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:01.566 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:01.566 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:01.566 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:01.566 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:01.566 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:01.566 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:01.566 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:01.566 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:01.566 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:01.566 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:01.566 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:01.566 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:01.567 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:01.567 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:01.567 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:01.567 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:01.567 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:13:01.567 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:01.567 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:01.567 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:01.567 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.567 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.567 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.567 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:01.567 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.567 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:13:01.567 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:01.567 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:01.567 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:01.567 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:01.567 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:01.567 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:01.567 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:01.567 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:01.567 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:01.567 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:01.567 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:01.567 11:36:27 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:01.567 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:01.567 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:01.567 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:01.567 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:01.567 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:01.567 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.567 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:01.567 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:01.567 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:01.567 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:01.567 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:13:01.567 11:36:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:09.707 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:09.707 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:09.707 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:09.707 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:09.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:09.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.419 ms 00:13:09.707 00:13:09.707 --- 10.0.0.2 ping statistics --- 00:13:09.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.707 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:09.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:09.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:13:09.707 00:13:09.707 --- 10.0.0.1 ping statistics --- 00:13:09.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.707 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:13:09.707 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:09.708 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:13:09.708 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:09.708 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:09.708 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:09.708 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:09.708 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:09.708 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:09.708 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:09.708 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:09.708 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:09.708 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:09.708 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:09.708 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=982677 00:13:09.708 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 982677 00:13:09.708 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:09.708 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # '[' -z 982677 ']' 00:13:09.708 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.708 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:09.708 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.708 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:09.708 11:36:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:09.708 [2024-11-15 11:36:34.657924] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:13:09.708 [2024-11-15 11:36:34.657992] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:09.708 [2024-11-15 11:36:34.758296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:09.708 [2024-11-15 11:36:34.812111] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:09.708 [2024-11-15 11:36:34.812168] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:09.708 [2024-11-15 11:36:34.812176] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:09.708 [2024-11-15 11:36:34.812184] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:09.708 [2024-11-15 11:36:34.812190] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:09.708 [2024-11-15 11:36:34.814598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:09.708 [2024-11-15 11:36:34.814698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:09.708 [2024-11-15 11:36:34.815013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:09.708 [2024-11-15 11:36:34.815015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.279 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:10.279 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@866 -- # return 0 00:13:10.279 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:10.279 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:10.279 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:10.279 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:10.279 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:10.279 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:10.279 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:10.279 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:10.279 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:10.279 "nvmf_tgt_1" 00:13:10.279 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:10.539 "nvmf_tgt_2" 00:13:10.539 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:13:10.540 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:10.540 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:10.540 11:36:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:10.800 true 00:13:10.800 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:10.800 true 00:13:10.800 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:10.800 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:11.061 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:11.061 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:11.061 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:11.061 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:11.061 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:13:11.061 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:11.062 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:13:11.062 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:11.062 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:11.062 rmmod nvme_tcp 00:13:11.062 rmmod nvme_fabrics 00:13:11.062 rmmod nvme_keyring 00:13:11.062 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:11.062 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:13:11.062 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:13:11.062 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 982677 ']' 00:13:11.062 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 982677 00:13:11.062 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' -z 982677 ']' 00:13:11.062 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # kill -0 982677 00:13:11.062 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # uname 00:13:11.062 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:11.062 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 982677 00:13:11.062 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:11.062 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:11.062 11:36:36 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@970 -- # echo 'killing process with pid 982677' 00:13:11.062 killing process with pid 982677 00:13:11.062 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # kill 982677 00:13:11.062 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@976 -- # wait 982677 00:13:11.322 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:11.322 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:11.322 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:11.322 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:13:11.322 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:13:11.322 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:11.322 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:13:11.322 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:11.322 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:11.322 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.322 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:11.322 11:36:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.234 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:13.495 00:13:13.495 real 0m11.926s 00:13:13.495 user 0m10.392s 00:13:13.495 sys 0m6.168s 00:13:13.495 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:13.495 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:13.495 ************************************ 00:13:13.495 END TEST nvmf_multitarget 00:13:13.495 ************************************ 00:13:13.495 11:36:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:13.495 11:36:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:13.495 11:36:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:13.495 11:36:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:13.495 ************************************ 00:13:13.495 START TEST nvmf_rpc 00:13:13.495 ************************************ 00:13:13.495 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:13.495 * Looking for test storage... 
00:13:13.495 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:13.495 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:13.495 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:13:13.495 11:36:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:13.756 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:13.756 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:13.756 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:13.756 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:13.756 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:13:13.756 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:13:13.756 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:13:13.756 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:13:13.756 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:13:13.756 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:13:13.756 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:13:13.756 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:13.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.757 --rc genhtml_branch_coverage=1 00:13:13.757 --rc genhtml_function_coverage=1 00:13:13.757 --rc genhtml_legend=1 00:13:13.757 --rc geninfo_all_blocks=1 00:13:13.757 --rc geninfo_unexecuted_blocks=1 00:13:13.757 00:13:13.757 ' 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:13.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.757 --rc genhtml_branch_coverage=1 00:13:13.757 --rc genhtml_function_coverage=1 00:13:13.757 --rc genhtml_legend=1 00:13:13.757 --rc geninfo_all_blocks=1 00:13:13.757 --rc geninfo_unexecuted_blocks=1 00:13:13.757 00:13:13.757 ' 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:13.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.757 --rc genhtml_branch_coverage=1 00:13:13.757 --rc genhtml_function_coverage=1 00:13:13.757 --rc genhtml_legend=1 00:13:13.757 --rc geninfo_all_blocks=1 00:13:13.757 --rc geninfo_unexecuted_blocks=1 00:13:13.757 00:13:13.757 ' 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:13.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:13.757 --rc genhtml_branch_coverage=1 00:13:13.757 --rc genhtml_function_coverage=1 00:13:13.757 --rc genhtml_legend=1 00:13:13.757 --rc geninfo_all_blocks=1 00:13:13.757 --rc geninfo_unexecuted_blocks=1 00:13:13.757 00:13:13.757 ' 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:13.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:13.757 11:36:39 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:13:13.757 11:36:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.899 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:21.899 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:13:21.899 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:21.899 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:21.899 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:21.899 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:21.899 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:21.899 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:13:21.899 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:21.899 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:13:21.899 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:13:21.899 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:13:21.899 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:13:21.899 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:13:21.899 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:13:21.899 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:21.899 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:21.899 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:21.900 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:21.900 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:21.900 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:21.900 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:21.900 11:36:46 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:21.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:21.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:13:21.900 00:13:21.900 --- 10.0.0.2 ping statistics --- 00:13:21.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.900 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:21.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:21.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:13:21.900 00:13:21.900 --- 10.0.0.1 ping statistics --- 00:13:21.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.900 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=987261 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 987261 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # '[' -z 987261 ']' 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:21.900 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.901 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:21.901 11:36:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:21.901 [2024-11-15 11:36:46.686389] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
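At this point the harness has moved one E810 port (cvl_0_0) into a private network namespace, left its sibling (cvl_0_1) in the root namespace as the initiator side, verified reachability in both directions with ping, and is starting nvmf_tgt inside the namespace (its startup banner continues below). A minimal sketch of the same wiring and launch, assuming the interface names and addresses seen in this log; the relative binary path and the socket wait loop are illustrative stand-ins for the harness's nvmfappstart/waitforlisten helpers:

# Sketch: recreate the namespace wiring and target launch traced above.
# Assumes two ports of one NIC exposed as cvl_0_0 / cvl_0_1 (as in this log).
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                # target side lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                             # root ns -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1         # target ns -> root ns

# Launch the target inside the namespace and wait for its RPC socket;
# the unix socket path is shared across namespaces (illustrative path).
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done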
00:13:21.901 [2024-11-15 11:36:46.686456] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:21.901 [2024-11-15 11:36:46.791421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:21.901 [2024-11-15 11:36:46.844197] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:21.901 [2024-11-15 11:36:46.844250] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:21.901 [2024-11-15 11:36:46.844258] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:21.901 [2024-11-15 11:36:46.844266] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:21.901 [2024-11-15 11:36:46.844272] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:21.901 [2024-11-15 11:36:46.846693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:21.901 [2024-11-15 11:36:46.846853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:21.901 [2024-11-15 11:36:46.847014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.901 [2024-11-15 11:36:46.847014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:22.162 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:22.162 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@866 -- # return 0 00:13:22.162 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:22.162 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:22.162 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.162 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:22.162 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:22.162 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.162 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.162 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.162 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:22.162 "tick_rate": 2400000000, 00:13:22.162 "poll_groups": [ 00:13:22.162 { 00:13:22.162 "name": "nvmf_tgt_poll_group_000", 00:13:22.162 "admin_qpairs": 0, 00:13:22.162 "io_qpairs": 0, 00:13:22.162 "current_admin_qpairs": 0, 00:13:22.162 "current_io_qpairs": 0, 00:13:22.162 "pending_bdev_io": 0, 00:13:22.162 "completed_nvme_io": 0, 00:13:22.162 "transports": [] 00:13:22.162 }, 00:13:22.162 { 00:13:22.162 "name": "nvmf_tgt_poll_group_001", 00:13:22.162 "admin_qpairs": 0, 00:13:22.162 "io_qpairs": 0, 00:13:22.162 "current_admin_qpairs": 0, 00:13:22.162 "current_io_qpairs": 0, 00:13:22.162 "pending_bdev_io": 0, 00:13:22.162 "completed_nvme_io": 0, 00:13:22.162 "transports": [] 00:13:22.162 }, 00:13:22.162 { 00:13:22.162 "name": "nvmf_tgt_poll_group_002", 00:13:22.162 "admin_qpairs": 0, 00:13:22.162 "io_qpairs": 0, 00:13:22.162 
"current_admin_qpairs": 0, 00:13:22.162 "current_io_qpairs": 0, 00:13:22.162 "pending_bdev_io": 0, 00:13:22.162 "completed_nvme_io": 0, 00:13:22.162 "transports": [] 00:13:22.162 }, 00:13:22.162 { 00:13:22.162 "name": "nvmf_tgt_poll_group_003", 00:13:22.162 "admin_qpairs": 0, 00:13:22.162 "io_qpairs": 0, 00:13:22.162 "current_admin_qpairs": 0, 00:13:22.162 "current_io_qpairs": 0, 00:13:22.162 "pending_bdev_io": 0, 00:13:22.162 "completed_nvme_io": 0, 00:13:22.162 "transports": [] 00:13:22.162 } 00:13:22.162 ] 00:13:22.162 }' 00:13:22.162 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:22.162 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:22.162 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:22.162 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:22.162 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:22.162 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:22.424 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:22.424 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:22.424 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.424 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.424 [2024-11-15 11:36:47.689779] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:22.424 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.424 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:22.424 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.424 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.424 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.424 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:22.424 "tick_rate": 2400000000, 00:13:22.424 "poll_groups": [ 00:13:22.424 { 00:13:22.424 "name": "nvmf_tgt_poll_group_000", 00:13:22.424 "admin_qpairs": 0, 00:13:22.424 "io_qpairs": 0, 00:13:22.424 "current_admin_qpairs": 0, 00:13:22.424 "current_io_qpairs": 0, 00:13:22.424 "pending_bdev_io": 0, 00:13:22.424 "completed_nvme_io": 0, 00:13:22.424 "transports": [ 00:13:22.424 { 00:13:22.424 "trtype": "TCP" 00:13:22.424 } 00:13:22.424 ] 00:13:22.424 }, 00:13:22.424 { 00:13:22.424 "name": "nvmf_tgt_poll_group_001", 00:13:22.424 "admin_qpairs": 0, 00:13:22.424 "io_qpairs": 0, 00:13:22.424 "current_admin_qpairs": 0, 00:13:22.424 "current_io_qpairs": 0, 00:13:22.424 "pending_bdev_io": 0, 00:13:22.424 "completed_nvme_io": 0, 00:13:22.424 "transports": [ 00:13:22.424 { 00:13:22.424 "trtype": "TCP" 00:13:22.424 } 00:13:22.424 ] 00:13:22.424 }, 00:13:22.424 { 00:13:22.424 "name": "nvmf_tgt_poll_group_002", 00:13:22.424 "admin_qpairs": 0, 00:13:22.424 "io_qpairs": 0, 00:13:22.424 "current_admin_qpairs": 0, 00:13:22.424 "current_io_qpairs": 0, 00:13:22.424 "pending_bdev_io": 0, 00:13:22.424 "completed_nvme_io": 0, 00:13:22.424 "transports": [ 00:13:22.424 { 00:13:22.424 "trtype": "TCP" 
00:13:22.424 } 00:13:22.424 ] 00:13:22.424 }, 00:13:22.424 { 00:13:22.424 "name": "nvmf_tgt_poll_group_003", 00:13:22.424 "admin_qpairs": 0, 00:13:22.424 "io_qpairs": 0, 00:13:22.424 "current_admin_qpairs": 0, 00:13:22.424 "current_io_qpairs": 0, 00:13:22.424 "pending_bdev_io": 0, 00:13:22.424 "completed_nvme_io": 0, 00:13:22.424 "transports": [ 00:13:22.424 { 00:13:22.424 "trtype": "TCP" 00:13:22.424 } 00:13:22.424 ] 00:13:22.424 } 00:13:22.424 ] 00:13:22.424 }' 00:13:22.424 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:22.424 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:22.424 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:22.424 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:22.424 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:22.424 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:22.425 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:22.425 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:22.425 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:22.425 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:22.425 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:22.425 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:22.425 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:22.425 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:22.425 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.425 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.425 Malloc1 00:13:22.425 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.425 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:22.425 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.425 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.425 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.425 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:22.425 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.425 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.425 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.425 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:22.425 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.425 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.425 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.425 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:22.425 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.425 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.425 [2024-11-15 11:36:47.899849] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.425 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.425 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:22.425 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:13:22.425 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:22.425 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:13:22.425 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:22.425 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:13:22.425 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:22.425 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:13:22.425 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:22.425 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:13:22.425 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:13:22.425 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:13:22.686 [2024-11-15 11:36:47.936927] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:13:22.686 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:22.686 could not add new controller: failed to write to nvme-fabrics device 00:13:22.686 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:13:22.686 11:36:47 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:22.686 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:22.686 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:22.686 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:22.686 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.686 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.686 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.686 11:36:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:24.071 11:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:24.071 11:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:24.071 11:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:24.071 11:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:24.071 11:36:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:26.611 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:26.612 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:26.612 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:26.612 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:26.612 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:26.612 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:26.612 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:26.612 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.612 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:26.612 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:26.612 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:26.612 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:26.612 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:26.612 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:26.612 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:26.612 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:26.612 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.612 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.612 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.612 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:26.612 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:13:26.612 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:26.612 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:13:26.612 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:26.612 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:13:26.612 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:26.612 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:13:26.612 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:26.612 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:13:26.612 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:13:26.612 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:26.612 [2024-11-15 11:36:51.713385] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:13:26.612 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:26.612 could not add new controller: failed to write to nvme-fabrics device 00:13:26.612 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:13:26.612 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:26.612 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:26.612 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:26.612 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:26.612 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.612 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.612 
11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.612 11:36:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:27.996 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:27.996 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:27.996 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:27.996 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:27.996 11:36:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:29.904 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:29.904 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:29.904 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:29.904 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:29.904 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:29.904 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:29.904 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:29.904 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.904 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:29.904 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:29.904 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:29.905 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:29.905 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:29.905 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:30.165 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:30.165 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:30.165 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.165 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.165 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.165 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:30.165 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:30.165 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:30.165 
11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.165 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.165 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.165 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:30.165 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.165 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.165 [2024-11-15 11:36:55.450358] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:30.165 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.165 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:30.165 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.165 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.165 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.165 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:30.165 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.165 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.165 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.165 11:36:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:31.547 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:31.547 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:31.547 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:31.547 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:31.547 11:36:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:34.088 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:34.088 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:34.088 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:34.088 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:34.088 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:34.088 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:34.088 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:34.088 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.088 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:34.088 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:34.088 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:34.088 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:34.088 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:34.088 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:34.088 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:34.088 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:34.088 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.088 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.088 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.088 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:34.088 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.088 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.088 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.088 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:34.088 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:34.088 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.088 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.088 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.088 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:34.088 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.088 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.088 [2024-11-15 11:36:59.205723] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:34.088 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.088 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:34.088 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.088 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.088 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.088 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:34.088 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.088 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:34.088 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.088 11:36:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:35.472 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:35.472 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:35.472 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:35.472 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:35.472 11:37:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:37.384 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:37.384 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:37.384 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:37.384 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:37.384 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:37.384 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:37.384 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:37.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.384 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:37.384 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:37.384 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:37.384 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:37.384 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:37.384 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:37.384 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:37.384 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:37.384 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.384 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.644 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.644 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:37.644 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.644 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.644 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.644 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:37.644 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:37.644 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.644 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.644 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.644 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:37.644 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.644 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.644 [2024-11-15 11:37:02.919620] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:37.644 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.644 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:37.644 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.644 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.644 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.644 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:37.644 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.644 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.644 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.644 11:37:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:39.026 11:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:39.026 11:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:39.026 11:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:39.026 11:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:39.026 11:37:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:41.570 
11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:41.571 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:41.571 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:41.571 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:41.571 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:41.571 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:41.571 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:41.571 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.571 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:41.571 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:41.571 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:41.571 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:41.571 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:41.571 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:41.571 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:41.571 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:41.571 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.571 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.571 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.571 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:41.571 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.571 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.571 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.571 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:41.571 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:41.571 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.571 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.571 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.571 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:41.571 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
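The trace above has just torn down one pass of the seq 1 5 loop in target/rpc.sh and begun the next: each pass creates nqn.2016-06.io.spdk:cnode1, adds a TCP listener on 10.0.0.2:4420, attaches Malloc1 as namespace 5, opens the subsystem to any host, connects from the initiator, then disconnects and deletes everything. A condensed sketch of one pass, assuming rpc_cmd wraps SPDK's scripts/rpc.py (the rpc.py path and the HOSTNQN/HOSTID variables are illustrative; method names and flags are taken from the trace):

# Sketch of one iteration of the create/connect/teardown loop.
NQN=nqn.2016-06.io.spdk:cnode1
RPC=./scripts/rpc.py                           # SPDK's RPC client (assumed path)
HOSTNQN=$(nvme gen-hostnqn)                    # as in the trace
HOSTID=${HOSTNQN##*:}                          # uuid portion, mirroring NVME_HOSTID

$RPC nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5
$RPC nvmf_subsystem_allow_any_host -e "$NQN"

nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420 \
    --hostnqn="$HOSTNQN" --hostid="$HOSTID"
# ... wait for the device, exercise it ...
nvme disconnect -n "$NQN"

$RPC nvmf_subsystem_remove_ns "$NQN" 5
$RPC nvmf_delete_subsystem "$NQN"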
00:13:41.571 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.571 [2024-11-15 11:37:06.647416] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:41.571 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.571 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:41.571 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.571 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.571 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.571 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:41.571 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.571 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.571 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.571 11:37:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:42.953 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:42.953 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:42.954 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:42.954 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:42.954 11:37:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:44.862 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:44.862 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:44.862 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:44.862 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:44.862 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:44.862 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:44.862 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:44.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.862 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:44.862 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:44.862 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:44.862 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 
00:13:44.862 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:44.862 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:44.862 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:44.862 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:44.862 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.862 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.121 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.121 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:45.121 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.121 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.121 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.121 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:45.121 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:45.121 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.121 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.121 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.121 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:45.121 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.121 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.121 [2024-11-15 11:37:10.396662] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:45.121 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.121 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:45.121 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.121 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.121 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.121 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:45.121 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.121 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:45.121 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.121 11:37:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:46.503 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:46.503 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:46.503 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:46.503 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:46.503 11:37:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:49.048 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:49.048 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:49.048 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:49.048 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:49.048 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:49.048 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:49.048 11:37:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:49.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:49.048 
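The loop traced above (target/rpc.sh@81-94) runs the full subsystem lifecycle once per iteration; the seq 1 5 just traced starts a second loop (rpc.sh@99-107, traced next) that repeats only the control-plane half, with no host connect. As a minimal sketch of one lifecycle iteration, assuming a running nvmf_tgt with a TCP transport and the Malloc1 bdev already created, and SPDK's scripts/rpc.py on PATH; names and values mirror the trace, and the real waitforserial's 15-retry cap is omitted:

NQN=nqn.2016-06.io.spdk:cnode1
rpc.py nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME             # rpc.sh@82
rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420    # rpc.sh@83
rpc.py nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5                        # rpc.sh@84, nsid 5
rpc.py nvmf_subsystem_allow_any_host "$NQN"                             # rpc.sh@85
nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420                       # rpc.sh@86, host side
# waitforserial: poll until the namespace surfaces as a block device
until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 1 ]; do
  sleep 2
done
nvme disconnect -n "$NQN"                                               # rpc.sh@90
rpc.py nvmf_subsystem_remove_ns "$NQN" 5                                # rpc.sh@93
rpc.py nvmf_delete_subsystem "$NQN"                                     # rpc.sh@94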
11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.048 [2024-11-15 11:37:14.128220] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:49.048 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.049 [2024-11-15 11:37:14.200392] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.049 
11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.049 [2024-11-15 11:37:14.264566] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.049 [2024-11-15 11:37:14.336800] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.049 [2024-11-15 11:37:14.405023] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats
00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:49.049 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{
00:13:49.049 "tick_rate": 2400000000,
00:13:49.049 "poll_groups": [
00:13:49.049 {
00:13:49.049 "name": "nvmf_tgt_poll_group_000",
00:13:49.050 "admin_qpairs": 0,
00:13:49.050 "io_qpairs": 224,
00:13:49.050 "current_admin_qpairs": 0,
00:13:49.050 "current_io_qpairs": 0,
00:13:49.050 "pending_bdev_io": 0,
00:13:49.050 "completed_nvme_io": 520,
00:13:49.050 "transports": [
00:13:49.050 {
00:13:49.050 "trtype": "TCP"
00:13:49.050 }
00:13:49.050 ]
00:13:49.050 },
00:13:49.050 {
00:13:49.050 "name": "nvmf_tgt_poll_group_001",
00:13:49.050 "admin_qpairs": 1,
00:13:49.050 "io_qpairs": 223,
00:13:49.050 "current_admin_qpairs": 0,
00:13:49.050 "current_io_qpairs": 0,
00:13:49.050 "pending_bdev_io": 0,
00:13:49.050 "completed_nvme_io": 227,
00:13:49.050 "transports": [
00:13:49.050 {
00:13:49.050 "trtype": "TCP"
00:13:49.050 }
00:13:49.050 ]
00:13:49.050 },
00:13:49.050 {
00:13:49.050 "name": "nvmf_tgt_poll_group_002",
00:13:49.050 "admin_qpairs": 6,
00:13:49.050 "io_qpairs": 218,
00:13:49.050 "current_admin_qpairs": 0,
00:13:49.050 "current_io_qpairs": 0,
00:13:49.050 "pending_bdev_io": 0,
00:13:49.050 "completed_nvme_io": 218,
00:13:49.050 "transports": [
00:13:49.050 {
00:13:49.050 "trtype": "TCP"
00:13:49.050 }
00:13:49.050 ]
00:13:49.050 },
00:13:49.050 {
00:13:49.050 "name": "nvmf_tgt_poll_group_003",
00:13:49.050 "admin_qpairs": 0,
00:13:49.050 "io_qpairs": 224,
00:13:49.050 "current_admin_qpairs": 0,
00:13:49.050 "current_io_qpairs": 0,
00:13:49.050 "pending_bdev_io": 0,
00:13:49.050 "completed_nvme_io": 274,
00:13:49.050 "transports": [
00:13:49.050 {
00:13:49.050 "trtype": "TCP"
00:13:49.050 }
00:13:49.050 ]
00:13:49.050 }
00:13:49.050 ]
00:13:49.050 }'
00:13:49.050 11:37:14
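The stats blob above is what target/rpc.sh@110 captured from nvmf_get_stats; the jsum helper traced next (rpc.sh@19-20) collapses one numeric field across all four poll groups into a single sum with jq and awk. A sketch of the same check, assuming scripts/rpc.py on PATH:

stats=$(rpc.py nvmf_get_stats)
jq '.poll_groups[].admin_qpairs' <<<"$stats" | awk '{s+=$1} END {print s}'  # 0+1+6+0 = 7 in this run
jq '.poll_groups[].io_qpairs'    <<<"$stats" | awk '{s+=$1} END {print s}'  # 224+223+218+224 = 889

Both sums must come out greater than zero for the (( 7 > 0 )) and (( 889 > 0 )) assertions traced below to pass.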
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:49.050 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:49.050 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:49.050 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:49.050 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:49.050 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:49.050 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:49.050 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:49.050 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:49.310 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:49.310 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:49.310 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:49.310 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:49.310 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:49.310 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:49.310 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:49.310 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:49.310 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:49.310 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:49.310 rmmod nvme_tcp 00:13:49.310 rmmod nvme_fabrics 00:13:49.310 rmmod nvme_keyring 00:13:49.310 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:49.310 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:49.310 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:49.310 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 987261 ']' 00:13:49.310 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 987261 00:13:49.310 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' -z 987261 ']' 00:13:49.310 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # kill -0 987261 00:13:49.310 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # uname 00:13:49.310 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:49.310 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 987261 00:13:49.310 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:49.310 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:49.310 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 987261' 
00:13:49.310 killing process with pid 987261
11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@971 -- # kill 987261
11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@976 -- # wait 987261
00:13:49.572 11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']'
11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini
11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr
11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save
11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore
11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns
11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
11:37:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:51.483 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:13:51.483
00:13:51.483 real 0m38.085s
00:13:51.483 user 1m53.792s
00:13:51.483 sys 0m8.020s
00:13:51.483 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable
00:13:51.483 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:51.483 ************************************
00:13:51.483 END TEST nvmf_rpc
00:13:51.483 ************************************
00:13:51.483 11:37:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:13:51.483 11:37:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:13:51.483 11:37:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable
00:13:51.483 11:37:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:13:51.744 ************************************
00:13:51.744 START TEST nvmf_invalid
************************************
00:13:51.744 11:37:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp
00:13:51.744 * Looking for test storage...
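Before invalid.sh's storage probe continues below, it is worth pinning down what nvmftestfini, traced just above, actually did to retire the target: unload the host-side NVMe modules, kill nvmf_tgt, strip only the SPDK-tagged iptables rules, and drop the target namespace. A rough equivalent, assuming root and the names from the trace; treat the final netns delete as an assumption about what _remove_spdk_ns boils down to, since its body is not shown here:

sync
modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring       # host initiator modules, per the rmmod lines above
kill "$nvmfpid" && wait "$nvmfpid"                      # nvmf_tgt PID, 987261 in this run
iptables-save | grep -v SPDK_NVMF | iptables-restore    # iptr: keep everything except the SPDK rules
ip -4 addr flush cvl_0_1                                # release the initiator-side address
ip netns delete cvl_0_0_ns_spdk 2>/dev/null             # assumed: _remove_spdk_ns tears down the netns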
00:13:51.744 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:51.744 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:51.744 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:13:51.744 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:51.744 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:51.744 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:51.744 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:51.744 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:51.744 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:51.744 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:51.744 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:51.744 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:51.744 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:51.744 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:51.744 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:51.744 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:51.744 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:51.744 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:51.744 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:51.744 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:51.744 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:51.744 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:51.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.745 --rc genhtml_branch_coverage=1 00:13:51.745 --rc genhtml_function_coverage=1 00:13:51.745 --rc genhtml_legend=1 00:13:51.745 --rc geninfo_all_blocks=1 00:13:51.745 --rc geninfo_unexecuted_blocks=1 00:13:51.745 00:13:51.745 ' 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:51.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.745 --rc genhtml_branch_coverage=1 00:13:51.745 --rc genhtml_function_coverage=1 00:13:51.745 --rc genhtml_legend=1 00:13:51.745 --rc geninfo_all_blocks=1 00:13:51.745 --rc geninfo_unexecuted_blocks=1 00:13:51.745 00:13:51.745 ' 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:51.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.745 --rc genhtml_branch_coverage=1 00:13:51.745 --rc genhtml_function_coverage=1 00:13:51.745 --rc genhtml_legend=1 00:13:51.745 --rc geninfo_all_blocks=1 00:13:51.745 --rc geninfo_unexecuted_blocks=1 00:13:51.745 00:13:51.745 ' 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:51.745 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:51.745 --rc genhtml_branch_coverage=1 00:13:51.745 --rc genhtml_function_coverage=1 00:13:51.745 --rc genhtml_legend=1 00:13:51.745 --rc geninfo_all_blocks=1 00:13:51.745 --rc geninfo_unexecuted_blocks=1 00:13:51.745 00:13:51.745 ' 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:51.745 11:37:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:51.745 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:51.745 11:37:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:59.912 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:59.912 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:59.913 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:59.913 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:59.913 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:59.913 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:59.913 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:59.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:59.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:13:59.914 00:13:59.914 --- 10.0.0.2 ping statistics --- 00:13:59.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.914 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:13:59.914 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:59.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:59.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:13:59.914 00:13:59.914 --- 10.0.0.1 ping statistics --- 00:13:59.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.914 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:13:59.914 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:59.914 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:13:59.914 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:59.914 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:59.914 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:59.914 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:59.914 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:59.914 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:59.914 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:59.914 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:59.914 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:59.914 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:59.914 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:59.914 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=997126 00:13:59.914 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 997126 00:13:59.914 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:59.914 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # '[' -z 997126 ']' 00:13:59.914 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.914 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:59.914 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.914 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:59.914 11:37:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:59.914 [2024-11-15 11:37:24.854421] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
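Annotation: the nvmf_tcp_init sequence above wires the two back-to-back E810 ports into a self-contained NVMe/TCP topology: the target port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace with 10.0.0.2/24, the initiator port (cvl_0_1) keeps 10.0.0.1/24 in the root namespace, an iptables ACCEPT rule opens TCP/4420, and a ping in each direction proves reachability before nvmf_tgt is launched inside the namespace. A minimal sketch of the same wiring, assuming root privileges and the interface names from the trace:

    ip netns add cvl_0_0_ns_spdk                          # namespace that will hold the target port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move the target port out of the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                    # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # namespace -> root ns
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # target runs inside the ns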
00:13:59.914 [2024-11-15 11:37:24.854486] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:59.914 [2024-11-15 11:37:24.954222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:59.914 [2024-11-15 11:37:25.007110] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:59.914 [2024-11-15 11:37:25.007160] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:59.914 [2024-11-15 11:37:25.007168] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:59.914 [2024-11-15 11:37:25.007175] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:59.914 [2024-11-15 11:37:25.007182] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:59.914 [2024-11-15 11:37:25.009308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:59.914 [2024-11-15 11:37:25.009472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:59.914 [2024-11-15 11:37:25.009632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:59.914 [2024-11-15 11:37:25.009632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.486 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:00.486 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@866 -- # return 0 00:14:00.486 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:00.486 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:00.486 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:00.486 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:00.486 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:00.486 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode2288 00:14:00.486 [2024-11-15 11:37:25.899899] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:00.486 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:00.486 { 00:14:00.486 "nqn": "nqn.2016-06.io.spdk:cnode2288", 00:14:00.486 "tgt_name": "foobar", 00:14:00.486 "method": "nvmf_create_subsystem", 00:14:00.486 "req_id": 1 00:14:00.486 } 00:14:00.486 Got JSON-RPC error response 00:14:00.486 response: 00:14:00.486 { 00:14:00.486 "code": -32603, 00:14:00.486 "message": "Unable to find target foobar" 00:14:00.486 }' 00:14:00.486 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:00.486 { 00:14:00.486 "nqn": "nqn.2016-06.io.spdk:cnode2288", 00:14:00.486 "tgt_name": "foobar", 00:14:00.486 "method": "nvmf_create_subsystem", 00:14:00.486 "req_id": 1 00:14:00.486 } 00:14:00.486 Got JSON-RPC error response 00:14:00.486 
response: 00:14:00.486 { 00:14:00.486 "code": -32603, 00:14:00.486 "message": "Unable to find target foobar" 00:14:00.486 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:00.486 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:00.486 11:37:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode2857 00:14:00.769 [2024-11-15 11:37:26.128853] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2857: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:00.769 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:00.769 { 00:14:00.769 "nqn": "nqn.2016-06.io.spdk:cnode2857", 00:14:00.769 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:00.769 "method": "nvmf_create_subsystem", 00:14:00.769 "req_id": 1 00:14:00.769 } 00:14:00.769 Got JSON-RPC error response 00:14:00.769 response: 00:14:00.769 { 00:14:00.769 "code": -32602, 00:14:00.769 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:00.769 }' 00:14:00.769 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:00.769 { 00:14:00.769 "nqn": "nqn.2016-06.io.spdk:cnode2857", 00:14:00.769 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:00.769 "method": "nvmf_create_subsystem", 00:14:00.769 "req_id": 1 00:14:00.769 } 00:14:00.769 Got JSON-RPC error response 00:14:00.769 response: 00:14:00.769 { 00:14:00.769 "code": -32602, 00:14:00.769 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:00.769 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:00.769 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:00.769 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode20827 00:14:01.041 [2024-11-15 11:37:26.341611] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20827: invalid model number 'SPDK_Controller' 00:14:01.041 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:01.041 { 00:14:01.041 "nqn": "nqn.2016-06.io.spdk:cnode20827", 00:14:01.042 "model_number": "SPDK_Controller\u001f", 00:14:01.042 "method": "nvmf_create_subsystem", 00:14:01.042 "req_id": 1 00:14:01.042 } 00:14:01.042 Got JSON-RPC error response 00:14:01.042 response: 00:14:01.042 { 00:14:01.042 "code": -32602, 00:14:01.042 "message": "Invalid MN SPDK_Controller\u001f" 00:14:01.042 }' 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:01.042 { 00:14:01.042 "nqn": "nqn.2016-06.io.spdk:cnode20827", 00:14:01.042 "model_number": "SPDK_Controller\u001f", 00:14:01.042 "method": "nvmf_create_subsystem", 00:14:01.042 "req_id": 1 00:14:01.042 } 00:14:01.042 Got JSON-RPC error response 00:14:01.042 response: 00:14:01.042 { 00:14:01.042 "code": -32602, 00:14:01.042 "message": "Invalid MN SPDK_Controller\u001f" 00:14:01.042 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:14:01.042 11:37:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.042 11:37:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:14:01.042 
11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.042 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:14:01.043 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:14:01.043 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:14:01.043 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.043 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.043 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 
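Annotation: the repetitive xtrace above and below is gen_random_s from invalid.sh assembling a 21-character serial number one character at a time — printf %x turns each code point into hex, echo -e materializes the byte, and string+= appends it; code points span 32–127, so quoting-hostile bytes such as '\', '"', and DEL get exercised against the JSON-RPC layer. A compact functional sketch (not the script's literal code):

    gen_random_s() {                                 # emit $1 random printable/DEL characters
        local length=$1 ll hex string=
        for (( ll = 0; ll < length; ll++ )); do
            printf -v hex '%x' $(( 32 + RANDOM % 96 ))   # pick a code point in 32..127
            string+=$(printf "\\x$hex")                  # append the corresponding byte
        done
        echo "$string"
    }
    gen_random_s 21                                  # e.g. the serial-number probe in this trace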
00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[  == \- ]] 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '/~\4L|@le7~osfloww*"' 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '/~\4L|@le7~osfloww*"' nqn.2016-06.io.spdk:cnode21839 00:14:01.320 [2024-11-15 11:37:26.723041] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21839: invalid serial number '/~\4L|@le7~osfloww*"' 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:14:01.320 { 00:14:01.320 "nqn": "nqn.2016-06.io.spdk:cnode21839", 00:14:01.320 "serial_number": "\u007f/~\\4L|@le7~osfloww*\"", 00:14:01.320 "method": "nvmf_create_subsystem", 00:14:01.320 "req_id": 1 00:14:01.320 } 00:14:01.320 Got JSON-RPC error response 00:14:01.320 response: 00:14:01.320 { 00:14:01.320 "code": -32602, 00:14:01.320 "message": "Invalid SN \u007f/~\\4L|@le7~osfloww*\"" 00:14:01.320 }' 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:14:01.320 { 00:14:01.320 "nqn": "nqn.2016-06.io.spdk:cnode21839", 00:14:01.320 "serial_number": "\u007f/~\\4L|@le7~osfloww*\"", 00:14:01.320 "method": "nvmf_create_subsystem", 00:14:01.320 "req_id": 1 00:14:01.320 } 00:14:01.320 Got JSON-RPC error response 00:14:01.320 response: 00:14:01.320 { 00:14:01.320 "code": -32602, 00:14:01.320 "message": "Invalid SN \u007f/~\\4L|@le7~osfloww*\"" 00:14:01.320 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' 
'69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.320 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:14:01.321 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo 
-e '\x41' 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
printf %x 122 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.603 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- 
# (( ll < length )) 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- 
# (( ll++ )) 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.604 11:37:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # string+='(' 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:14:01.604 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.605 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.605 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:14:01.605 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x62' 00:14:01.605 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:14:01.605 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.605 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.605 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:14:01.605 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:14:01.605 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:14:01.605 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:01.605 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:01.605 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ + == \- ]] 00:14:01.605 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '+6L/WA]Y?rE@zU/V~g_O/)EDY"\\dre(4J-Y3@bJ' 00:14:01.605 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '+6L/WA]Y?rE@zU/V~g_O/)EDY"\\dre(4J-Y3@bJ' nqn.2016-06.io.spdk:cnode475 00:14:01.879 [2024-11-15 11:37:27.236778] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode475: invalid model number '+6L/WA]Y?rE@zU/V~g_O/)EDY"\\dre(4J-Y3@bJ' 00:14:01.879 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:14:01.879 { 00:14:01.879 "nqn": "nqn.2016-06.io.spdk:cnode475", 00:14:01.879 "model_number": "+6L/WA]Y?rE@zU/V~\u007fg_O/)EDY\"\\\\dre(4J-Y3@bJ", 00:14:01.879 "method": "nvmf_create_subsystem", 00:14:01.879 "req_id": 1 00:14:01.879 } 00:14:01.879 Got JSON-RPC error response 00:14:01.879 response: 00:14:01.879 { 00:14:01.879 "code": -32602, 00:14:01.879 "message": "Invalid MN +6L/WA]Y?rE@zU/V~\u007fg_O/)EDY\"\\\\dre(4J-Y3@bJ" 00:14:01.879 }' 00:14:01.879 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:14:01.879 { 00:14:01.879 "nqn": "nqn.2016-06.io.spdk:cnode475", 00:14:01.879 "model_number": "+6L/WA]Y?rE@zU/V~\u007fg_O/)EDY\"\\\\dre(4J-Y3@bJ", 00:14:01.879 "method": "nvmf_create_subsystem", 00:14:01.879 "req_id": 1 00:14:01.879 } 00:14:01.879 Got JSON-RPC error response 00:14:01.879 response: 00:14:01.879 { 00:14:01.879 "code": -32602, 00:14:01.879 "message": "Invalid MN +6L/WA]Y?rE@zU/V~\u007fg_O/)EDY\"\\\\dre(4J-Y3@bJ" 00:14:01.879 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:01.879 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:14:02.140 [2024-11-15 11:37:27.425475] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:02.140 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:14:02.140 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:14:02.140 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:14:02.140 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:14:02.140 11:37:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:14:02.400 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:14:02.400 [2024-11-15 11:37:27.791579] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:14:02.400 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:14:02.400 { 00:14:02.400 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:02.400 "listen_address": { 00:14:02.400 "trtype": "tcp", 00:14:02.400 "traddr": "", 00:14:02.400 "trsvcid": "4421" 00:14:02.400 }, 00:14:02.400 "method": "nvmf_subsystem_remove_listener", 00:14:02.400 "req_id": 1 00:14:02.400 } 00:14:02.400 Got JSON-RPC error response 00:14:02.400 response: 00:14:02.400 { 00:14:02.400 "code": -32602, 00:14:02.400 "message": "Invalid parameters" 00:14:02.400 }' 00:14:02.400 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:14:02.400 { 00:14:02.400 "nqn": "nqn.2016-06.io.spdk:cnode", 00:14:02.400 "listen_address": { 00:14:02.400 "trtype": "tcp", 00:14:02.400 "traddr": "", 00:14:02.400 "trsvcid": "4421" 00:14:02.400 }, 00:14:02.400 "method": "nvmf_subsystem_remove_listener", 00:14:02.400 "req_id": 1 00:14:02.400 } 00:14:02.400 Got JSON-RPC error response 00:14:02.400 response: 00:14:02.400 { 00:14:02.400 "code": -32602, 00:14:02.400 "message": "Invalid parameters" 00:14:02.400 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:14:02.400 11:37:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8851 -i 0 00:14:02.659 [2024-11-15 11:37:27.980135] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8851: invalid cntlid range [0-65519] 00:14:02.660 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:14:02.660 { 00:14:02.660 "nqn": "nqn.2016-06.io.spdk:cnode8851", 00:14:02.660 "min_cntlid": 0, 00:14:02.660 "method": "nvmf_create_subsystem", 00:14:02.660 "req_id": 1 00:14:02.660 } 00:14:02.660 Got JSON-RPC error response 00:14:02.660 response: 00:14:02.660 { 00:14:02.660 "code": -32602, 00:14:02.660 "message": "Invalid cntlid range [0-65519]" 00:14:02.660 }' 00:14:02.660 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:14:02.660 { 00:14:02.660 "nqn": "nqn.2016-06.io.spdk:cnode8851", 00:14:02.660 "min_cntlid": 0, 00:14:02.660 "method": "nvmf_create_subsystem", 00:14:02.660 "req_id": 1 00:14:02.660 } 00:14:02.660 Got JSON-RPC error response 00:14:02.660 response: 00:14:02.660 { 00:14:02.660 "code": -32602, 00:14:02.660 "message": "Invalid cntlid range [0-65519]" 00:14:02.660 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:02.660 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26946 -i 65520 00:14:02.920 [2024-11-15 11:37:28.168791] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26946: invalid cntlid range [65520-65519] 00:14:02.920 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:14:02.920 { 00:14:02.920 "nqn": 
"nqn.2016-06.io.spdk:cnode26946", 00:14:02.920 "min_cntlid": 65520, 00:14:02.920 "method": "nvmf_create_subsystem", 00:14:02.920 "req_id": 1 00:14:02.920 } 00:14:02.920 Got JSON-RPC error response 00:14:02.920 response: 00:14:02.920 { 00:14:02.920 "code": -32602, 00:14:02.920 "message": "Invalid cntlid range [65520-65519]" 00:14:02.920 }' 00:14:02.920 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:14:02.920 { 00:14:02.920 "nqn": "nqn.2016-06.io.spdk:cnode26946", 00:14:02.920 "min_cntlid": 65520, 00:14:02.920 "method": "nvmf_create_subsystem", 00:14:02.920 "req_id": 1 00:14:02.920 } 00:14:02.920 Got JSON-RPC error response 00:14:02.920 response: 00:14:02.920 { 00:14:02.920 "code": -32602, 00:14:02.920 "message": "Invalid cntlid range [65520-65519]" 00:14:02.920 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:02.920 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29021 -I 0 00:14:02.920 [2024-11-15 11:37:28.353333] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29021: invalid cntlid range [1-0] 00:14:02.920 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:14:02.920 { 00:14:02.920 "nqn": "nqn.2016-06.io.spdk:cnode29021", 00:14:02.920 "max_cntlid": 0, 00:14:02.920 "method": "nvmf_create_subsystem", 00:14:02.920 "req_id": 1 00:14:02.920 } 00:14:02.920 Got JSON-RPC error response 00:14:02.920 response: 00:14:02.920 { 00:14:02.920 "code": -32602, 00:14:02.920 "message": "Invalid cntlid range [1-0]" 00:14:02.920 }' 00:14:02.920 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:14:02.920 { 00:14:02.920 "nqn": "nqn.2016-06.io.spdk:cnode29021", 00:14:02.920 "max_cntlid": 0, 00:14:02.920 "method": "nvmf_create_subsystem", 00:14:02.920 "req_id": 1 00:14:02.920 } 00:14:02.920 Got JSON-RPC error response 00:14:02.920 response: 00:14:02.920 { 00:14:02.920 "code": -32602, 00:14:02.920 "message": "Invalid cntlid range [1-0]" 00:14:02.920 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:02.920 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7380 -I 65520 00:14:03.179 [2024-11-15 11:37:28.533882] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7380: invalid cntlid range [1-65520] 00:14:03.179 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:14:03.180 { 00:14:03.180 "nqn": "nqn.2016-06.io.spdk:cnode7380", 00:14:03.180 "max_cntlid": 65520, 00:14:03.180 "method": "nvmf_create_subsystem", 00:14:03.180 "req_id": 1 00:14:03.180 } 00:14:03.180 Got JSON-RPC error response 00:14:03.180 response: 00:14:03.180 { 00:14:03.180 "code": -32602, 00:14:03.180 "message": "Invalid cntlid range [1-65520]" 00:14:03.180 }' 00:14:03.180 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:14:03.180 { 00:14:03.180 "nqn": "nqn.2016-06.io.spdk:cnode7380", 00:14:03.180 "max_cntlid": 65520, 00:14:03.180 "method": "nvmf_create_subsystem", 00:14:03.180 "req_id": 1 00:14:03.180 } 00:14:03.180 Got JSON-RPC error response 00:14:03.180 response: 00:14:03.180 { 00:14:03.180 "code": -32602, 00:14:03.180 "message": "Invalid cntlid range [1-65520]" 
00:14:03.180 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:03.180 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6049 -i 6 -I 5 00:14:03.439 [2024-11-15 11:37:28.714456] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6049: invalid cntlid range [6-5] 00:14:03.439 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:14:03.439 { 00:14:03.439 "nqn": "nqn.2016-06.io.spdk:cnode6049", 00:14:03.439 "min_cntlid": 6, 00:14:03.439 "max_cntlid": 5, 00:14:03.439 "method": "nvmf_create_subsystem", 00:14:03.439 "req_id": 1 00:14:03.439 } 00:14:03.439 Got JSON-RPC error response 00:14:03.439 response: 00:14:03.439 { 00:14:03.439 "code": -32602, 00:14:03.439 "message": "Invalid cntlid range [6-5]" 00:14:03.439 }' 00:14:03.439 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:14:03.439 { 00:14:03.439 "nqn": "nqn.2016-06.io.spdk:cnode6049", 00:14:03.439 "min_cntlid": 6, 00:14:03.439 "max_cntlid": 5, 00:14:03.439 "method": "nvmf_create_subsystem", 00:14:03.439 "req_id": 1 00:14:03.439 } 00:14:03.439 Got JSON-RPC error response 00:14:03.439 response: 00:14:03.439 { 00:14:03.439 "code": -32602, 00:14:03.439 "message": "Invalid cntlid range [6-5]" 00:14:03.439 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:14:03.439 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:14:03.439 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:14:03.439 { 00:14:03.439 "name": "foobar", 00:14:03.439 "method": "nvmf_delete_target", 00:14:03.439 "req_id": 1 00:14:03.439 } 00:14:03.439 Got JSON-RPC error response 00:14:03.439 response: 00:14:03.439 { 00:14:03.439 "code": -32602, 00:14:03.439 "message": "The specified target doesn'\''t exist, cannot delete it." 00:14:03.439 }' 00:14:03.439 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:14:03.439 { 00:14:03.439 "name": "foobar", 00:14:03.439 "method": "nvmf_delete_target", 00:14:03.439 "req_id": 1 00:14:03.439 } 00:14:03.439 Got JSON-RPC error response 00:14:03.439 response: 00:14:03.439 { 00:14:03.439 "code": -32602, 00:14:03.439 "message": "The specified target doesn't exist, cannot delete it." 
00:14:03.439 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:14:03.439 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:14:03.439 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:14:03.439 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:03.439 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:14:03.439 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:03.439 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:14:03.439 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:03.439 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:03.439 rmmod nvme_tcp 00:14:03.439 rmmod nvme_fabrics 00:14:03.439 rmmod nvme_keyring 00:14:03.439 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:03.439 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:14:03.439 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:14:03.439 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 997126 ']' 00:14:03.439 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 997126 00:14:03.439 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # '[' -z 997126 ']' 00:14:03.439 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # kill -0 997126 00:14:03.439 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # uname 00:14:03.439 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:03.439 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 997126 00:14:03.700 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:03.700 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:03.700 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 997126' 00:14:03.700 killing process with pid 997126 00:14:03.700 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@971 -- # kill 997126 00:14:03.700 11:37:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@976 -- # wait 997126 00:14:03.700 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:03.700 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:03.700 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:03.700 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:14:03.700 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:14:03.700 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:14:03.700 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:14:03.700 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:03.700 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:03.700 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:03.700 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:03.700 11:37:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.265 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:06.265 00:14:06.265 real 0m14.188s 00:14:06.265 user 0m20.946s 00:14:06.265 sys 0m6.774s 00:14:06.265 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:06.265 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:06.265 ************************************ 00:14:06.265 END TEST nvmf_invalid 00:14:06.265 ************************************ 00:14:06.265 11:37:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:06.265 11:37:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:06.265 11:37:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:06.265 11:37:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:06.265 ************************************ 00:14:06.265 START TEST nvmf_connect_stress 00:14:06.265 ************************************ 00:14:06.265 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:06.266 * Looking for test storage... 
00:14:06.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:06.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.266 --rc genhtml_branch_coverage=1 00:14:06.266 --rc genhtml_function_coverage=1 00:14:06.266 --rc genhtml_legend=1 00:14:06.266 --rc geninfo_all_blocks=1 00:14:06.266 --rc geninfo_unexecuted_blocks=1 00:14:06.266 00:14:06.266 ' 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:06.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.266 --rc genhtml_branch_coverage=1 00:14:06.266 --rc genhtml_function_coverage=1 00:14:06.266 --rc genhtml_legend=1 00:14:06.266 --rc geninfo_all_blocks=1 00:14:06.266 --rc geninfo_unexecuted_blocks=1 00:14:06.266 00:14:06.266 ' 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:06.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.266 --rc genhtml_branch_coverage=1 00:14:06.266 --rc genhtml_function_coverage=1 00:14:06.266 --rc genhtml_legend=1 00:14:06.266 --rc geninfo_all_blocks=1 00:14:06.266 --rc geninfo_unexecuted_blocks=1 00:14:06.266 00:14:06.266 ' 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:06.266 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.266 --rc genhtml_branch_coverage=1 00:14:06.266 --rc genhtml_function_coverage=1 00:14:06.266 --rc genhtml_legend=1 00:14:06.266 --rc geninfo_all_blocks=1 00:14:06.266 --rc geninfo_unexecuted_blocks=1 00:14:06.266 00:14:06.266 ' 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:14:06.266 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.267 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:14:06.267 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:06.267 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:06.267 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:06.267 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:06.267 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:06.267 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:14:06.267 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:06.267 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:06.267 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:06.267 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:06.267 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:06.267 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:06.267 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:06.267 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:06.267 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:06.267 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:06.267 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.267 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:06.267 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.267 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:06.267 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:06.267 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:14:06.267 11:37:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.406 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:14.406 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:14:14.406 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:14.406 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:14.406 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:14.406 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:14.406 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:14.406 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:14:14.406 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:14.406 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:14:14.406 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:14:14.406 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:14:14.406 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:14:14.406 11:37:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:14:14.406 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:14:14.406 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:14.406 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:14.406 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:14.406 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:14.406 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:14.406 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:14.406 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:14.406 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:14.406 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:14.406 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:14.406 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:14.407 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:14.407 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:14.407 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:14.407 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:14.407 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:14.408 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:14.408 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:14.408 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:14.408 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:14.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:14.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.698 ms 00:14:14.408 00:14:14.408 --- 10.0.0.2 ping statistics --- 00:14:14.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.408 rtt min/avg/max/mdev = 0.698/0.698/0.698/0.000 ms 00:14:14.408 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:14.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:14.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:14:14.408 00:14:14.408 --- 10.0.0.1 ping statistics --- 00:14:14.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:14.408 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:14:14.408 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:14.408 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:14:14.408 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:14.408 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:14.408 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:14.408 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:14.408 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:14.408 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:14.408 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:14.408 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:14.408 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:14.408 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:14.408 11:37:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.408 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=1002316 00:14:14.408 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 1002316 00:14:14.408 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:14.408 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # '[' -z 1002316 ']' 00:14:14.408 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:14.408 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:14.408 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:14.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:14.408 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:14.408 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.408 [2024-11-15 11:37:39.059211] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:14:14.408 [2024-11-15 11:37:39.059278] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:14.408 [2024-11-15 11:37:39.158712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:14.408 [2024-11-15 11:37:39.209908] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:14.408 [2024-11-15 11:37:39.209957] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:14.408 [2024-11-15 11:37:39.209966] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:14.408 [2024-11-15 11:37:39.209973] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:14.408 [2024-11-15 11:37:39.209979] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:14.408 [2024-11-15 11:37:39.211889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:14.408 [2024-11-15 11:37:39.212051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:14.408 [2024-11-15 11:37:39.212052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:14.408 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:14.408 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@866 -- # return 0 00:14:14.408 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:14.408 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:14.408 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.669 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:14.669 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:14.669 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.669 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.669 [2024-11-15 11:37:39.943995] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:14.669 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.669 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:14.669 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:14.669 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.669 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.669 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:14.669 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.669 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.669 [2024-11-15 11:37:39.969609] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:14.669 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.669 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:14.669 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.669 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:14.669 NULL1 00:14:14.669 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.669 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1002507 00:14:14.669 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:14.669 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:14.669 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:14.669 11:37:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:14.669 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.669 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.669 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.669 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.669 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.669 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.669 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.669 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.669 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.669 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.669 11:37:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.669 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.669 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.669 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.669 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.669 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.669 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.669 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.669 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.669 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.669 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.669 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.670 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.670 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.670 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.670 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.670 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.670 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.670 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.670 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.670 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.670 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.670 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.670 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.670 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.670 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.670 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.670 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.670 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:14.670 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:14.670 11:37:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1002507 00:14:14.670 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.670 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.670 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.240 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.240 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1002507 00:14:15.240 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.240 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.240 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.500 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.500 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1002507 00:14:15.500 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.500 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.500 11:37:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:15.760 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:15.760 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1002507 00:14:15.760 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.760 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:15.760 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.020 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.020 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1002507 00:14:16.020 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.020 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.020 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.281 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.281 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1002507 00:14:16.281 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.281 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.281 11:37:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:16.851 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.851 11:37:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1002507 00:14:16.851 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.851 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.851 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:17.110 11:37:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
(this five-record poll iteration -- kill -0 1002507, rpc_cmd, xtrace_disable, set +x, [[ 0 == 0 ]] -- repeats unchanged from 00:14:17.110 11:37:42 through 00:14:24.644 11:37:49 while the stress clients run)
00:14:24.644 11:37:49
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1002507 00:14:24.644 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:24.644 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:24.644 11:37:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:24.905 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:24.905 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:24.905 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1002507 00:14:24.905 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1002507) - No such process 00:14:24.905 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1002507 00:14:24.905 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:24.905 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:24.905 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:24.905 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:24.905 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:14:24.905 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:24.905 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:14:24.905 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:24.905 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:24.905 rmmod nvme_tcp 00:14:24.905 rmmod nvme_fabrics 00:14:24.905 rmmod nvme_keyring 00:14:24.905 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:24.905 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:14:24.905 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:14:24.905 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 1002316 ']' 00:14:24.905 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 1002316 00:14:24.905 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' -z 1002316 ']' 00:14:24.905 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # kill -0 1002316 00:14:24.905 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # uname 00:14:24.905 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:24.905 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1002316 00:14:24.905 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 
00:14:24.905 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:24.905 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1002316' 00:14:24.905 killing process with pid 1002316 00:14:24.905 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # kill 1002316 00:14:24.905 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@976 -- # wait 1002316 00:14:25.165 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:25.165 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:25.165 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:25.165 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:14:25.165 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:14:25.165 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:25.165 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:14:25.165 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:25.165 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:25.165 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:25.165 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:25.165 11:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.077 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:27.077 00:14:27.077 real 0m21.260s 00:14:27.077 user 0m42.244s 00:14:27.077 sys 0m9.315s 00:14:27.077 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:27.077 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:27.077 ************************************ 00:14:27.077 END TEST nvmf_connect_stress 00:14:27.077 ************************************ 00:14:27.077 11:37:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:27.077 11:37:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:27.077 11:37:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:27.077 11:37:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:27.339 ************************************ 00:14:27.339 START TEST nvmf_fused_ordering 00:14:27.339 ************************************ 00:14:27.339 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:27.339 * Looking for test storage... 
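For reference, the nvmftestfini teardown traced in the connect_stress block above reduces to a handful of shell steps. This is a hedged sketch reconstructed from this trace, not the SPDK scripts themselves; the $nvmfpid variable is illustrative, and the assumption that _remove_spdk_ns deletes the cvl_0_0_ns_spdk namespace is mine:

  sync                                                   # flush writes before unloading modules
  modprobe -v -r nvme-tcp                                # the trace retries this in a {1..20} loop
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                                        # target pid recorded at startup (1002316 in this run)
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK_NVMF-tagged rules
  ip netns delete cvl_0_0_ns_spdk                        # assumed effect of _remove_spdk_ns
  ip -4 addr flush cvl_0_1                               # matches nvmf/common.sh@303 above

The setup path tags its ACCEPT rule with an SPDK_NVMF comment (visible in the iptables call in the fused_ordering setup below), which is what the grep -v filter keys on.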
00:14:27.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:27.339 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:27.339 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:14:27.339 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:27.339 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:27.339 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:27.339 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:27.339 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:27.339 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:14:27.339 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:14:27.339 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:14:27.339 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:14:27.339 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:14:27.339 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:14:27.339 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:14:27.339 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:27.339 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:14:27.339 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:14:27.339 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:27.339 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:27.339 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:14:27.339 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:14:27.339 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:27.339 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:14:27.339 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:14:27.339 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:14:27.339 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:14:27.339 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:27.339 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:14:27.339 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:14:27.339 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:27.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.340 --rc genhtml_branch_coverage=1 00:14:27.340 --rc genhtml_function_coverage=1 00:14:27.340 --rc genhtml_legend=1 00:14:27.340 --rc geninfo_all_blocks=1 00:14:27.340 --rc geninfo_unexecuted_blocks=1 00:14:27.340 00:14:27.340 ' 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:27.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.340 --rc genhtml_branch_coverage=1 00:14:27.340 --rc genhtml_function_coverage=1 00:14:27.340 --rc genhtml_legend=1 00:14:27.340 --rc geninfo_all_blocks=1 00:14:27.340 --rc geninfo_unexecuted_blocks=1 00:14:27.340 00:14:27.340 ' 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:27.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.340 --rc genhtml_branch_coverage=1 00:14:27.340 --rc genhtml_function_coverage=1 00:14:27.340 --rc genhtml_legend=1 00:14:27.340 --rc geninfo_all_blocks=1 00:14:27.340 --rc geninfo_unexecuted_blocks=1 00:14:27.340 00:14:27.340 ' 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:27.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.340 --rc genhtml_branch_coverage=1 00:14:27.340 --rc genhtml_function_coverage=1 00:14:27.340 --rc genhtml_legend=1 00:14:27.340 --rc geninfo_all_blocks=1 00:14:27.340 --rc geninfo_unexecuted_blocks=1 00:14:27.340 00:14:27.340 ' 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:(same three toolchain prefixes repeated by the stacked source calls):/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=(same value with /opt/go/1.21.1/bin prepended) 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=(same value with /opt/protoc/21.7/bin prepended) 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo (the exported PATH) 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:14:27.340 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:27.340 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.601 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:27.601 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:27.601 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:14:27.601 11:37:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:14:35.742 11:37:59 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:35.742 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:35.742 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:35.742 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:35.743 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:35.743 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:35.743 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:35.743 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:35.743 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:35.743 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:35.743 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:35.743 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:35.743 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:35.743 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:35.743 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:35.743 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:35.743 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:35.743 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:35.743 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:35.743 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:14:35.743 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:35.743 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:14:35.743 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:35.743 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:35.743 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:35.743 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:35.743 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:35.743 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:35.743 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:35.743 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:35.743 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:35.743 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:35.743 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:35.743 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:35.743 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:35.743 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:35.743 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:35.743 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:35.743 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:35.743 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:35.743 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:35.743 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:35.743 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:35.743 11:37:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:35.743 11:38:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:35.743 11:38:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:35.743 11:38:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:35.743 11:38:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:35.743 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:35.743 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.566 ms 00:14:35.743 00:14:35.743 --- 10.0.0.2 ping statistics --- 00:14:35.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.743 rtt min/avg/max/mdev = 0.566/0.566/0.566/0.000 ms 00:14:35.743 11:38:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:35.743 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:35.743 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:14:35.743 00:14:35.743 --- 10.0.0.1 ping statistics --- 00:14:35.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.743 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:14:35.743 11:38:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:35.743 11:38:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:14:35.743 11:38:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:35.743 11:38:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:35.743 11:38:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:35.743 11:38:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:35.743 11:38:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:35.743 11:38:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:35.743 11:38:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:35.743 11:38:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:35.743 11:38:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:35.743 11:38:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:35.743 11:38:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:35.743 11:38:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=1008701 00:14:35.743 11:38:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 1008701 00:14:35.743 11:38:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:35.743 11:38:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # '[' -z 1008701 ']' 00:14:35.743 11:38:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.743 11:38:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:35.743 11:38:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:35.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.743 11:38:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:35.743 11:38:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:35.743 [2024-11-15 11:38:00.197339] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:14:35.743 [2024-11-15 11:38:00.197422] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.743 [2024-11-15 11:38:00.301300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.743 [2024-11-15 11:38:00.351515] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:35.743 [2024-11-15 11:38:00.351576] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:35.743 [2024-11-15 11:38:00.351586] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:35.743 [2024-11-15 11:38:00.351593] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:35.743 [2024-11-15 11:38:00.351599] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:35.743 [2024-11-15 11:38:00.352367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:35.743 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:35.743 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@866 -- # return 0 00:14:35.743 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:35.743 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:35.743 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:35.743 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:35.743 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:35.743 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.743 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:35.743 [2024-11-15 11:38:01.057469] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:35.743 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.743 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:35.743 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.743 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:35.743 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:14:35.743 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:35.743 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.743 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:35.743 [2024-11-15 11:38:01.081780] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:35.743 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.743 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:35.743 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.743 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:35.743 NULL1 00:14:35.743 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.743 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:35.743 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.743 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:35.743 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.743 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:35.743 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.743 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:35.743 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.743 11:38:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:35.743 [2024-11-15 11:38:01.152714] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
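The rpc_cmd calls traced above are the entire target bring-up for this test; rpc_cmd is the test-harness wrapper around scripts/rpc.py. Replayed by hand against a running nvmf_tgt, the sequence would look roughly like this (a sketch assuming the same NQN, address, and port as this run):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                                          # transport options copied verbatim from the trace
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10  # allow any host, serial number, max 10 namespaces
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_null_create NULL1 1000 512                                                  # 1000 MiB null bdev, 512 B blocks -> the "size: 1GB" namespace below
  scripts/rpc.py bdev_wait_for_examine
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering binary then connects as an initiator using the -r transport ID string shown in the trace.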
00:14:35.743 [2024-11-15 11:38:01.152758] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1008996 ]
00:14:36.315 Attached to nqn.2016-06.io.spdk:cnode1
00:14:36.315 Namespace ID: 1 size: 1GB
00:14:36.315 fused_ordering(0)
(fused_ordering(1) through fused_ordering(420) follow in unbroken ascending order, timestamps 00:14:36.315 through 00:14:37.149; the run continues)
00:14:37.149
fused_ordering(421) 00:14:37.149 fused_ordering(422) 00:14:37.149 fused_ordering(423) 00:14:37.149 fused_ordering(424) 00:14:37.149 fused_ordering(425) 00:14:37.149 fused_ordering(426) 00:14:37.149 fused_ordering(427) 00:14:37.149 fused_ordering(428) 00:14:37.149 fused_ordering(429) 00:14:37.149 fused_ordering(430) 00:14:37.149 fused_ordering(431) 00:14:37.149 fused_ordering(432) 00:14:37.149 fused_ordering(433) 00:14:37.149 fused_ordering(434) 00:14:37.149 fused_ordering(435) 00:14:37.149 fused_ordering(436) 00:14:37.149 fused_ordering(437) 00:14:37.149 fused_ordering(438) 00:14:37.149 fused_ordering(439) 00:14:37.149 fused_ordering(440) 00:14:37.149 fused_ordering(441) 00:14:37.149 fused_ordering(442) 00:14:37.149 fused_ordering(443) 00:14:37.149 fused_ordering(444) 00:14:37.149 fused_ordering(445) 00:14:37.149 fused_ordering(446) 00:14:37.149 fused_ordering(447) 00:14:37.149 fused_ordering(448) 00:14:37.149 fused_ordering(449) 00:14:37.149 fused_ordering(450) 00:14:37.149 fused_ordering(451) 00:14:37.149 fused_ordering(452) 00:14:37.149 fused_ordering(453) 00:14:37.149 fused_ordering(454) 00:14:37.149 fused_ordering(455) 00:14:37.149 fused_ordering(456) 00:14:37.149 fused_ordering(457) 00:14:37.149 fused_ordering(458) 00:14:37.149 fused_ordering(459) 00:14:37.149 fused_ordering(460) 00:14:37.149 fused_ordering(461) 00:14:37.149 fused_ordering(462) 00:14:37.149 fused_ordering(463) 00:14:37.149 fused_ordering(464) 00:14:37.149 fused_ordering(465) 00:14:37.149 fused_ordering(466) 00:14:37.149 fused_ordering(467) 00:14:37.149 fused_ordering(468) 00:14:37.149 fused_ordering(469) 00:14:37.149 fused_ordering(470) 00:14:37.149 fused_ordering(471) 00:14:37.149 fused_ordering(472) 00:14:37.149 fused_ordering(473) 00:14:37.149 fused_ordering(474) 00:14:37.149 fused_ordering(475) 00:14:37.149 fused_ordering(476) 00:14:37.150 fused_ordering(477) 00:14:37.150 fused_ordering(478) 00:14:37.150 fused_ordering(479) 00:14:37.150 fused_ordering(480) 00:14:37.150 fused_ordering(481) 00:14:37.150 fused_ordering(482) 00:14:37.150 fused_ordering(483) 00:14:37.150 fused_ordering(484) 00:14:37.150 fused_ordering(485) 00:14:37.150 fused_ordering(486) 00:14:37.150 fused_ordering(487) 00:14:37.150 fused_ordering(488) 00:14:37.150 fused_ordering(489) 00:14:37.150 fused_ordering(490) 00:14:37.150 fused_ordering(491) 00:14:37.150 fused_ordering(492) 00:14:37.150 fused_ordering(493) 00:14:37.150 fused_ordering(494) 00:14:37.150 fused_ordering(495) 00:14:37.150 fused_ordering(496) 00:14:37.150 fused_ordering(497) 00:14:37.150 fused_ordering(498) 00:14:37.150 fused_ordering(499) 00:14:37.150 fused_ordering(500) 00:14:37.150 fused_ordering(501) 00:14:37.150 fused_ordering(502) 00:14:37.150 fused_ordering(503) 00:14:37.150 fused_ordering(504) 00:14:37.150 fused_ordering(505) 00:14:37.150 fused_ordering(506) 00:14:37.150 fused_ordering(507) 00:14:37.150 fused_ordering(508) 00:14:37.150 fused_ordering(509) 00:14:37.150 fused_ordering(510) 00:14:37.150 fused_ordering(511) 00:14:37.150 fused_ordering(512) 00:14:37.150 fused_ordering(513) 00:14:37.150 fused_ordering(514) 00:14:37.150 fused_ordering(515) 00:14:37.150 fused_ordering(516) 00:14:37.150 fused_ordering(517) 00:14:37.150 fused_ordering(518) 00:14:37.150 fused_ordering(519) 00:14:37.150 fused_ordering(520) 00:14:37.150 fused_ordering(521) 00:14:37.150 fused_ordering(522) 00:14:37.150 fused_ordering(523) 00:14:37.150 fused_ordering(524) 00:14:37.150 fused_ordering(525) 00:14:37.150 fused_ordering(526) 00:14:37.150 fused_ordering(527) 00:14:37.150 fused_ordering(528) 
00:14:37.150 fused_ordering(529) 00:14:37.150 fused_ordering(530) 00:14:37.150 fused_ordering(531) 00:14:37.150 fused_ordering(532) 00:14:37.150 fused_ordering(533) 00:14:37.150 fused_ordering(534) 00:14:37.150 fused_ordering(535) 00:14:37.150 fused_ordering(536) 00:14:37.150 fused_ordering(537) 00:14:37.150 fused_ordering(538) 00:14:37.150 fused_ordering(539) 00:14:37.150 fused_ordering(540) 00:14:37.150 fused_ordering(541) 00:14:37.150 fused_ordering(542) 00:14:37.150 fused_ordering(543) 00:14:37.150 fused_ordering(544) 00:14:37.150 fused_ordering(545) 00:14:37.150 fused_ordering(546) 00:14:37.150 fused_ordering(547) 00:14:37.150 fused_ordering(548) 00:14:37.150 fused_ordering(549) 00:14:37.150 fused_ordering(550) 00:14:37.150 fused_ordering(551) 00:14:37.150 fused_ordering(552) 00:14:37.150 fused_ordering(553) 00:14:37.150 fused_ordering(554) 00:14:37.150 fused_ordering(555) 00:14:37.150 fused_ordering(556) 00:14:37.150 fused_ordering(557) 00:14:37.150 fused_ordering(558) 00:14:37.150 fused_ordering(559) 00:14:37.150 fused_ordering(560) 00:14:37.150 fused_ordering(561) 00:14:37.150 fused_ordering(562) 00:14:37.150 fused_ordering(563) 00:14:37.150 fused_ordering(564) 00:14:37.150 fused_ordering(565) 00:14:37.150 fused_ordering(566) 00:14:37.150 fused_ordering(567) 00:14:37.150 fused_ordering(568) 00:14:37.150 fused_ordering(569) 00:14:37.150 fused_ordering(570) 00:14:37.150 fused_ordering(571) 00:14:37.150 fused_ordering(572) 00:14:37.150 fused_ordering(573) 00:14:37.150 fused_ordering(574) 00:14:37.150 fused_ordering(575) 00:14:37.150 fused_ordering(576) 00:14:37.150 fused_ordering(577) 00:14:37.150 fused_ordering(578) 00:14:37.150 fused_ordering(579) 00:14:37.150 fused_ordering(580) 00:14:37.150 fused_ordering(581) 00:14:37.150 fused_ordering(582) 00:14:37.150 fused_ordering(583) 00:14:37.150 fused_ordering(584) 00:14:37.150 fused_ordering(585) 00:14:37.150 fused_ordering(586) 00:14:37.150 fused_ordering(587) 00:14:37.150 fused_ordering(588) 00:14:37.150 fused_ordering(589) 00:14:37.150 fused_ordering(590) 00:14:37.150 fused_ordering(591) 00:14:37.150 fused_ordering(592) 00:14:37.150 fused_ordering(593) 00:14:37.150 fused_ordering(594) 00:14:37.150 fused_ordering(595) 00:14:37.150 fused_ordering(596) 00:14:37.150 fused_ordering(597) 00:14:37.150 fused_ordering(598) 00:14:37.150 fused_ordering(599) 00:14:37.150 fused_ordering(600) 00:14:37.150 fused_ordering(601) 00:14:37.150 fused_ordering(602) 00:14:37.150 fused_ordering(603) 00:14:37.150 fused_ordering(604) 00:14:37.150 fused_ordering(605) 00:14:37.150 fused_ordering(606) 00:14:37.150 fused_ordering(607) 00:14:37.150 fused_ordering(608) 00:14:37.150 fused_ordering(609) 00:14:37.150 fused_ordering(610) 00:14:37.150 fused_ordering(611) 00:14:37.150 fused_ordering(612) 00:14:37.150 fused_ordering(613) 00:14:37.150 fused_ordering(614) 00:14:37.150 fused_ordering(615) 00:14:37.723 fused_ordering(616) 00:14:37.723 fused_ordering(617) 00:14:37.723 fused_ordering(618) 00:14:37.723 fused_ordering(619) 00:14:37.723 fused_ordering(620) 00:14:37.723 fused_ordering(621) 00:14:37.723 fused_ordering(622) 00:14:37.723 fused_ordering(623) 00:14:37.723 fused_ordering(624) 00:14:37.723 fused_ordering(625) 00:14:37.723 fused_ordering(626) 00:14:37.723 fused_ordering(627) 00:14:37.723 fused_ordering(628) 00:14:37.723 fused_ordering(629) 00:14:37.723 fused_ordering(630) 00:14:37.723 fused_ordering(631) 00:14:37.723 fused_ordering(632) 00:14:37.723 fused_ordering(633) 00:14:37.723 fused_ordering(634) 00:14:37.723 fused_ordering(635) 00:14:37.723 
fused_ordering(636) 00:14:37.723 fused_ordering(637) 00:14:37.723 fused_ordering(638) 00:14:37.723 fused_ordering(639) 00:14:37.723 fused_ordering(640) 00:14:37.723 fused_ordering(641) 00:14:37.723 fused_ordering(642) 00:14:37.723 fused_ordering(643) 00:14:37.723 fused_ordering(644) 00:14:37.723 fused_ordering(645) 00:14:37.723 fused_ordering(646) 00:14:37.723 fused_ordering(647) 00:14:37.723 fused_ordering(648) 00:14:37.723 fused_ordering(649) 00:14:37.723 fused_ordering(650) 00:14:37.723 fused_ordering(651) 00:14:37.723 fused_ordering(652) 00:14:37.723 fused_ordering(653) 00:14:37.723 fused_ordering(654) 00:14:37.723 fused_ordering(655) 00:14:37.723 fused_ordering(656) 00:14:37.723 fused_ordering(657) 00:14:37.723 fused_ordering(658) 00:14:37.723 fused_ordering(659) 00:14:37.723 fused_ordering(660) 00:14:37.723 fused_ordering(661) 00:14:37.723 fused_ordering(662) 00:14:37.723 fused_ordering(663) 00:14:37.723 fused_ordering(664) 00:14:37.723 fused_ordering(665) 00:14:37.723 fused_ordering(666) 00:14:37.723 fused_ordering(667) 00:14:37.723 fused_ordering(668) 00:14:37.723 fused_ordering(669) 00:14:37.723 fused_ordering(670) 00:14:37.723 fused_ordering(671) 00:14:37.723 fused_ordering(672) 00:14:37.723 fused_ordering(673) 00:14:37.723 fused_ordering(674) 00:14:37.723 fused_ordering(675) 00:14:37.723 fused_ordering(676) 00:14:37.723 fused_ordering(677) 00:14:37.723 fused_ordering(678) 00:14:37.723 fused_ordering(679) 00:14:37.723 fused_ordering(680) 00:14:37.723 fused_ordering(681) 00:14:37.723 fused_ordering(682) 00:14:37.723 fused_ordering(683) 00:14:37.723 fused_ordering(684) 00:14:37.723 fused_ordering(685) 00:14:37.723 fused_ordering(686) 00:14:37.723 fused_ordering(687) 00:14:37.723 fused_ordering(688) 00:14:37.723 fused_ordering(689) 00:14:37.723 fused_ordering(690) 00:14:37.723 fused_ordering(691) 00:14:37.723 fused_ordering(692) 00:14:37.723 fused_ordering(693) 00:14:37.723 fused_ordering(694) 00:14:37.723 fused_ordering(695) 00:14:37.723 fused_ordering(696) 00:14:37.723 fused_ordering(697) 00:14:37.723 fused_ordering(698) 00:14:37.723 fused_ordering(699) 00:14:37.723 fused_ordering(700) 00:14:37.723 fused_ordering(701) 00:14:37.723 fused_ordering(702) 00:14:37.723 fused_ordering(703) 00:14:37.723 fused_ordering(704) 00:14:37.723 fused_ordering(705) 00:14:37.723 fused_ordering(706) 00:14:37.723 fused_ordering(707) 00:14:37.723 fused_ordering(708) 00:14:37.723 fused_ordering(709) 00:14:37.723 fused_ordering(710) 00:14:37.723 fused_ordering(711) 00:14:37.723 fused_ordering(712) 00:14:37.723 fused_ordering(713) 00:14:37.723 fused_ordering(714) 00:14:37.723 fused_ordering(715) 00:14:37.723 fused_ordering(716) 00:14:37.723 fused_ordering(717) 00:14:37.723 fused_ordering(718) 00:14:37.723 fused_ordering(719) 00:14:37.723 fused_ordering(720) 00:14:37.723 fused_ordering(721) 00:14:37.723 fused_ordering(722) 00:14:37.723 fused_ordering(723) 00:14:37.723 fused_ordering(724) 00:14:37.723 fused_ordering(725) 00:14:37.723 fused_ordering(726) 00:14:37.723 fused_ordering(727) 00:14:37.723 fused_ordering(728) 00:14:37.723 fused_ordering(729) 00:14:37.723 fused_ordering(730) 00:14:37.723 fused_ordering(731) 00:14:37.723 fused_ordering(732) 00:14:37.723 fused_ordering(733) 00:14:37.723 fused_ordering(734) 00:14:37.723 fused_ordering(735) 00:14:37.723 fused_ordering(736) 00:14:37.723 fused_ordering(737) 00:14:37.723 fused_ordering(738) 00:14:37.723 fused_ordering(739) 00:14:37.723 fused_ordering(740) 00:14:37.723 fused_ordering(741) 00:14:37.723 fused_ordering(742) 00:14:37.723 fused_ordering(743) 
00:14:37.723 fused_ordering(744) 00:14:37.723 fused_ordering(745) 00:14:37.723 fused_ordering(746) 00:14:37.723 fused_ordering(747) 00:14:37.723 fused_ordering(748) 00:14:37.723 fused_ordering(749) 00:14:37.723 fused_ordering(750) 00:14:37.723 fused_ordering(751) 00:14:37.723 fused_ordering(752) 00:14:37.723 fused_ordering(753) 00:14:37.723 fused_ordering(754) 00:14:37.723 fused_ordering(755) 00:14:37.723 fused_ordering(756) 00:14:37.723 fused_ordering(757) 00:14:37.723 fused_ordering(758) 00:14:37.723 fused_ordering(759) 00:14:37.723 fused_ordering(760) 00:14:37.723 fused_ordering(761) 00:14:37.723 fused_ordering(762) 00:14:37.723 fused_ordering(763) 00:14:37.723 fused_ordering(764) 00:14:37.723 fused_ordering(765) 00:14:37.723 fused_ordering(766) 00:14:37.723 fused_ordering(767) 00:14:37.723 fused_ordering(768) 00:14:37.723 fused_ordering(769) 00:14:37.723 fused_ordering(770) 00:14:37.723 fused_ordering(771) 00:14:37.723 fused_ordering(772) 00:14:37.723 fused_ordering(773) 00:14:37.723 fused_ordering(774) 00:14:37.723 fused_ordering(775) 00:14:37.723 fused_ordering(776) 00:14:37.723 fused_ordering(777) 00:14:37.723 fused_ordering(778) 00:14:37.723 fused_ordering(779) 00:14:37.723 fused_ordering(780) 00:14:37.723 fused_ordering(781) 00:14:37.723 fused_ordering(782) 00:14:37.723 fused_ordering(783) 00:14:37.723 fused_ordering(784) 00:14:37.723 fused_ordering(785) 00:14:37.723 fused_ordering(786) 00:14:37.723 fused_ordering(787) 00:14:37.723 fused_ordering(788) 00:14:37.723 fused_ordering(789) 00:14:37.723 fused_ordering(790) 00:14:37.723 fused_ordering(791) 00:14:37.723 fused_ordering(792) 00:14:37.723 fused_ordering(793) 00:14:37.723 fused_ordering(794) 00:14:37.723 fused_ordering(795) 00:14:37.723 fused_ordering(796) 00:14:37.723 fused_ordering(797) 00:14:37.723 fused_ordering(798) 00:14:37.723 fused_ordering(799) 00:14:37.723 fused_ordering(800) 00:14:37.723 fused_ordering(801) 00:14:37.723 fused_ordering(802) 00:14:37.723 fused_ordering(803) 00:14:37.723 fused_ordering(804) 00:14:37.723 fused_ordering(805) 00:14:37.723 fused_ordering(806) 00:14:37.723 fused_ordering(807) 00:14:37.723 fused_ordering(808) 00:14:37.723 fused_ordering(809) 00:14:37.723 fused_ordering(810) 00:14:37.723 fused_ordering(811) 00:14:37.723 fused_ordering(812) 00:14:37.723 fused_ordering(813) 00:14:37.723 fused_ordering(814) 00:14:37.723 fused_ordering(815) 00:14:37.723 fused_ordering(816) 00:14:37.723 fused_ordering(817) 00:14:37.723 fused_ordering(818) 00:14:37.723 fused_ordering(819) 00:14:37.723 fused_ordering(820) 00:14:38.296 fused_ordering(821) 00:14:38.296 fused_ordering(822) 00:14:38.296 fused_ordering(823) 00:14:38.296 fused_ordering(824) 00:14:38.296 fused_ordering(825) 00:14:38.296 fused_ordering(826) 00:14:38.296 fused_ordering(827) 00:14:38.296 fused_ordering(828) 00:14:38.296 fused_ordering(829) 00:14:38.296 fused_ordering(830) 00:14:38.296 fused_ordering(831) 00:14:38.296 fused_ordering(832) 00:14:38.296 fused_ordering(833) 00:14:38.296 fused_ordering(834) 00:14:38.296 fused_ordering(835) 00:14:38.296 fused_ordering(836) 00:14:38.296 fused_ordering(837) 00:14:38.296 fused_ordering(838) 00:14:38.296 fused_ordering(839) 00:14:38.296 fused_ordering(840) 00:14:38.296 fused_ordering(841) 00:14:38.296 fused_ordering(842) 00:14:38.296 fused_ordering(843) 00:14:38.296 fused_ordering(844) 00:14:38.296 fused_ordering(845) 00:14:38.296 fused_ordering(846) 00:14:38.296 fused_ordering(847) 00:14:38.296 fused_ordering(848) 00:14:38.296 fused_ordering(849) 00:14:38.296 fused_ordering(850) 00:14:38.296 
fused_ordering(851) 00:14:38.296 fused_ordering(852) 00:14:38.296 fused_ordering(853) 00:14:38.296 fused_ordering(854) 00:14:38.296 fused_ordering(855) 00:14:38.296 fused_ordering(856) 00:14:38.296 fused_ordering(857) 00:14:38.296 fused_ordering(858) 00:14:38.296 fused_ordering(859) 00:14:38.296 fused_ordering(860) 00:14:38.296 fused_ordering(861) 00:14:38.296 fused_ordering(862) 00:14:38.296 fused_ordering(863) 00:14:38.296 fused_ordering(864) 00:14:38.296 fused_ordering(865) 00:14:38.296 fused_ordering(866) 00:14:38.296 fused_ordering(867) 00:14:38.296 fused_ordering(868) 00:14:38.296 fused_ordering(869) 00:14:38.296 fused_ordering(870) 00:14:38.296 fused_ordering(871) 00:14:38.296 fused_ordering(872) 00:14:38.296 fused_ordering(873) 00:14:38.296 fused_ordering(874) 00:14:38.296 fused_ordering(875) 00:14:38.296 fused_ordering(876) 00:14:38.296 fused_ordering(877) 00:14:38.296 fused_ordering(878) 00:14:38.296 fused_ordering(879) 00:14:38.296 fused_ordering(880) 00:14:38.296 fused_ordering(881) 00:14:38.296 fused_ordering(882) 00:14:38.296 fused_ordering(883) 00:14:38.296 fused_ordering(884) 00:14:38.296 fused_ordering(885) 00:14:38.296 fused_ordering(886) 00:14:38.296 fused_ordering(887) 00:14:38.296 fused_ordering(888) 00:14:38.296 fused_ordering(889) 00:14:38.296 fused_ordering(890) 00:14:38.296 fused_ordering(891) 00:14:38.296 fused_ordering(892) 00:14:38.296 fused_ordering(893) 00:14:38.296 fused_ordering(894) 00:14:38.296 fused_ordering(895) 00:14:38.296 fused_ordering(896) 00:14:38.296 fused_ordering(897) 00:14:38.296 fused_ordering(898) 00:14:38.296 fused_ordering(899) 00:14:38.296 fused_ordering(900) 00:14:38.296 fused_ordering(901) 00:14:38.296 fused_ordering(902) 00:14:38.296 fused_ordering(903) 00:14:38.296 fused_ordering(904) 00:14:38.296 fused_ordering(905) 00:14:38.296 fused_ordering(906) 00:14:38.296 fused_ordering(907) 00:14:38.296 fused_ordering(908) 00:14:38.296 fused_ordering(909) 00:14:38.296 fused_ordering(910) 00:14:38.296 fused_ordering(911) 00:14:38.296 fused_ordering(912) 00:14:38.296 fused_ordering(913) 00:14:38.296 fused_ordering(914) 00:14:38.296 fused_ordering(915) 00:14:38.296 fused_ordering(916) 00:14:38.296 fused_ordering(917) 00:14:38.296 fused_ordering(918) 00:14:38.296 fused_ordering(919) 00:14:38.296 fused_ordering(920) 00:14:38.296 fused_ordering(921) 00:14:38.296 fused_ordering(922) 00:14:38.296 fused_ordering(923) 00:14:38.296 fused_ordering(924) 00:14:38.296 fused_ordering(925) 00:14:38.296 fused_ordering(926) 00:14:38.296 fused_ordering(927) 00:14:38.296 fused_ordering(928) 00:14:38.296 fused_ordering(929) 00:14:38.296 fused_ordering(930) 00:14:38.296 fused_ordering(931) 00:14:38.296 fused_ordering(932) 00:14:38.296 fused_ordering(933) 00:14:38.296 fused_ordering(934) 00:14:38.296 fused_ordering(935) 00:14:38.296 fused_ordering(936) 00:14:38.296 fused_ordering(937) 00:14:38.296 fused_ordering(938) 00:14:38.296 fused_ordering(939) 00:14:38.296 fused_ordering(940) 00:14:38.296 fused_ordering(941) 00:14:38.296 fused_ordering(942) 00:14:38.297 fused_ordering(943) 00:14:38.297 fused_ordering(944) 00:14:38.297 fused_ordering(945) 00:14:38.297 fused_ordering(946) 00:14:38.297 fused_ordering(947) 00:14:38.297 fused_ordering(948) 00:14:38.297 fused_ordering(949) 00:14:38.297 fused_ordering(950) 00:14:38.297 fused_ordering(951) 00:14:38.297 fused_ordering(952) 00:14:38.297 fused_ordering(953) 00:14:38.297 fused_ordering(954) 00:14:38.297 fused_ordering(955) 00:14:38.297 fused_ordering(956) 00:14:38.297 fused_ordering(957) 00:14:38.297 fused_ordering(958) 
00:14:38.297 fused_ordering(959) 00:14:38.297 fused_ordering(960) 00:14:38.297 fused_ordering(961) 00:14:38.297 fused_ordering(962) 00:14:38.297 fused_ordering(963) 00:14:38.297 fused_ordering(964) 00:14:38.297 fused_ordering(965) 00:14:38.297 fused_ordering(966) 00:14:38.297 fused_ordering(967) 00:14:38.297 fused_ordering(968) 00:14:38.297 fused_ordering(969) 00:14:38.297 fused_ordering(970) 00:14:38.297 fused_ordering(971) 00:14:38.297 fused_ordering(972) 00:14:38.297 fused_ordering(973) 00:14:38.297 fused_ordering(974) 00:14:38.297 fused_ordering(975) 00:14:38.297 fused_ordering(976) 00:14:38.297 fused_ordering(977) 00:14:38.297 fused_ordering(978) 00:14:38.297 fused_ordering(979) 00:14:38.297 fused_ordering(980) 00:14:38.297 fused_ordering(981) 00:14:38.297 fused_ordering(982) 00:14:38.297 fused_ordering(983) 00:14:38.297 fused_ordering(984) 00:14:38.297 fused_ordering(985) 00:14:38.297 fused_ordering(986) 00:14:38.297 fused_ordering(987) 00:14:38.297 fused_ordering(988) 00:14:38.297 fused_ordering(989) 00:14:38.297 fused_ordering(990) 00:14:38.297 fused_ordering(991) 00:14:38.297 fused_ordering(992) 00:14:38.297 fused_ordering(993) 00:14:38.297 fused_ordering(994) 00:14:38.297 fused_ordering(995) 00:14:38.297 fused_ordering(996) 00:14:38.297 fused_ordering(997) 00:14:38.297 fused_ordering(998) 00:14:38.297 fused_ordering(999) 00:14:38.297 fused_ordering(1000) 00:14:38.297 fused_ordering(1001) 00:14:38.297 fused_ordering(1002) 00:14:38.297 fused_ordering(1003) 00:14:38.297 fused_ordering(1004) 00:14:38.297 fused_ordering(1005) 00:14:38.297 fused_ordering(1006) 00:14:38.297 fused_ordering(1007) 00:14:38.297 fused_ordering(1008) 00:14:38.297 fused_ordering(1009) 00:14:38.297 fused_ordering(1010) 00:14:38.297 fused_ordering(1011) 00:14:38.297 fused_ordering(1012) 00:14:38.297 fused_ordering(1013) 00:14:38.297 fused_ordering(1014) 00:14:38.297 fused_ordering(1015) 00:14:38.297 fused_ordering(1016) 00:14:38.297 fused_ordering(1017) 00:14:38.297 fused_ordering(1018) 00:14:38.297 fused_ordering(1019) 00:14:38.297 fused_ordering(1020) 00:14:38.297 fused_ordering(1021) 00:14:38.297 fused_ordering(1022) 00:14:38.297 fused_ordering(1023) 00:14:38.297 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:38.297 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:38.297 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:38.297 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:14:38.297 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:38.297 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:14:38.297 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:38.297 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:38.297 rmmod nvme_tcp 00:14:38.297 rmmod nvme_fabrics 00:14:38.297 rmmod nvme_keyring 00:14:38.297 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:38.297 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:14:38.297 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:14:38.297 11:38:03 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 1008701 ']' 00:14:38.297 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 1008701 00:14:38.297 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' -z 1008701 ']' 00:14:38.297 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # kill -0 1008701 00:14:38.297 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # uname 00:14:38.297 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:38.297 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1008701 00:14:38.297 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:38.297 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:38.297 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1008701' 00:14:38.297 killing process with pid 1008701 00:14:38.297 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # kill 1008701 00:14:38.297 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@976 -- # wait 1008701 00:14:38.558 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:38.558 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:38.558 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:38.558 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:14:38.558 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:14:38.558 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:38.558 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:14:38.558 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:38.558 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:38.558 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.558 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:38.558 11:38:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.106 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:41.106 00:14:41.106 real 0m13.399s 00:14:41.106 user 0m7.327s 00:14:41.106 sys 0m7.007s 00:14:41.106 11:38:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:41.106 ************************************ 00:14:41.106 END TEST nvmf_fused_ordering 00:14:41.106 
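The killprocess hits above (common/autotest_common.sh, @952 through @976) amount to a small guarded teardown routine: validate the PID, refuse to signal a sudo wrapper, then SIGTERM and reap. A minimal sketch of that flow, reconstructed from the xtrace rather than copied from the SPDK source, with the sudo branch simplified:

  # Sketch of the teardown traced above; names follow the xtrace,
  # details are simplified assumptions, not the verbatim SPDK helper.
  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1                # no PID was ever recorded
      kill -0 "$pid" 2> /dev/null || return 0  # already gone, nothing to do
      if [ "$(uname)" = Linux ]; then
          local process_name
          process_name=$(ps --no-headers -o comm= "$pid")
          # The trace tests for a sudo wrapper before signalling; this
          # sketch simply refuses to kill one.
          [ "$process_name" = sudo ] && return 1
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2> /dev/null || true         # reap our own child, ignore status
  }

The wait call only succeeds here because the target (pid 1008701 above) is a child of the test shell; for a foreign PID a polling loop on kill -0 would be needed instead.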
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:14:41.106 ************************************
00:14:41.106 START TEST nvmf_ns_masking
00:14:41.106 ************************************
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # test/nvmf/target/ns_masking.sh --transport=tcp
00:14:41.106 * Looking for test storage...
00:14:41.106 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-:
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-:
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<'
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:41.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.106 --rc genhtml_branch_coverage=1 00:14:41.106 --rc genhtml_function_coverage=1 00:14:41.106 --rc genhtml_legend=1 00:14:41.106 --rc geninfo_all_blocks=1 00:14:41.106 --rc geninfo_unexecuted_blocks=1 00:14:41.106 00:14:41.106 ' 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:41.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.106 --rc genhtml_branch_coverage=1 00:14:41.106 --rc genhtml_function_coverage=1 00:14:41.106 --rc genhtml_legend=1 00:14:41.106 --rc geninfo_all_blocks=1 00:14:41.106 --rc geninfo_unexecuted_blocks=1 00:14:41.106 00:14:41.106 ' 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:41.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.106 --rc genhtml_branch_coverage=1 00:14:41.106 --rc genhtml_function_coverage=1 00:14:41.106 --rc genhtml_legend=1 00:14:41.106 --rc geninfo_all_blocks=1 00:14:41.106 --rc geninfo_unexecuted_blocks=1 00:14:41.106 00:14:41.106 ' 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:41.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:41.106 --rc genhtml_branch_coverage=1 00:14:41.106 --rc genhtml_function_coverage=1 00:14:41.106 --rc genhtml_legend=1 00:14:41.106 --rc geninfo_all_blocks=1 00:14:41.106 --rc geninfo_unexecuted_blocks=1 00:14:41.106 00:14:41.106 ' 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same three toolchain prefixes repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same repeated toolchain prefixes and system dirs as above]
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same repeated toolchain prefixes and system dirs as above]
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[same repeated toolchain prefixes and system dirs as above]
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:14:41.106 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=3f63d7ff-8c78-4ed8-95af-d4b1051fcdfa 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=36ed62e2-45cd-444e-ad6f-8f1e3b76ce39 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=dcaf5abc-d9e9-4d79-a40a-4d9d1a874b7b 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:14:41.106 11:38:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:49.246 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:49.246 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:49.246 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:49.246 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:49.246 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:49.246 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:49.246 11:38:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:49.246 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:49.246 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:49.246 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:49.246 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:49.246 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:49.246 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:49.246 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:49.246 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:49.247 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:49.247 11:38:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:49.247 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:49.247 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
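The nvmf/common.sh hits above (@410 through @429) walk each detected e810 PCI function and collect the kernel interfaces sysfs exposes behind it, which is where the cvl_0_0 and cvl_0_1 names come from. A self-contained sketch of that discovery pass; the operstate test is an assumption of this sketch, standing in for the script's own up/down check that the trace only shows as [[ up == up ]]:

  #!/usr/bin/env bash
  # Enumerate net interfaces bound to a set of PCI functions via sysfs.
  pci_devs=(0000:4b:00.0 0000:4b:00.1)   # the two ports found above
  net_devs=()
  for pci in "${pci_devs[@]}"; do
      for net_path in "/sys/bus/pci/devices/$pci/net/"*; do
          [ -e "$net_path" ] || continue   # no net driver bound to this function
          dev=${net_path##*/}              # e.g. cvl_0_0
          if [ "$(cat "$net_path/operstate")" = up ]; then
              echo "Found net devices under $pci: $dev"
              net_devs+=("$dev")
          fi
      done
  done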
00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:49.247 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:49.247 11:38:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:49.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:49.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:14:49.247 00:14:49.247 --- 10.0.0.2 ping statistics --- 00:14:49.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.247 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:49.247 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:49.247 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:14:49.247 00:14:49.247 --- 10.0.0.1 ping statistics --- 00:14:49.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.247 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:49.247 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=1013730 00:14:49.248 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 1013730 00:14:49.248 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:49.248 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 1013730 ']' 00:14:49.248 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.248 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:49.248 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:49.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.248 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:49.248 11:38:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:49.248 [2024-11-15 11:38:13.961950] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:14:49.248 [2024-11-15 11:38:13.962017] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.248 [2024-11-15 11:38:14.063277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.248 [2024-11-15 11:38:14.114276] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:49.248 [2024-11-15 11:38:14.114331] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:49.248 [2024-11-15 11:38:14.114340] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:49.248 [2024-11-15 11:38:14.114347] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:49.248 [2024-11-15 11:38:14.114353] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
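By this point the harness has finished its loopback topology: of the two E810 ports, cvl_0_0 was moved into a fresh network namespace (cvl_0_0_ns_spdk) and serves as the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1; a TCP/4420 ACCEPT rule was inserted (tagged with an SPDK_NVMF comment so teardown can find it again), both directions were ping-verified, and nvmf_tgt was launched with an `ip netns exec cvl_0_0_ns_spdk` prefix so the target only sees the namespaced port. Condensed from the trace (interface names are the ones found on this machine):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator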
00:14:49.248 [2024-11-15 11:38:14.115127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.509 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:49.509 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:14:49.509 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:49.509 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:49.509 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:49.509 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:49.509 11:38:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:49.509 [2024-11-15 11:38:15.003584] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:49.770 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:49.770 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:49.770 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:49.770 Malloc1 00:14:49.770 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:50.031 Malloc2 00:14:50.031 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:50.292 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:50.553 11:38:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:50.553 [2024-11-15 11:38:15.980461] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:50.553 11:38:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:50.553 11:38:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dcaf5abc-d9e9-4d79-a40a-4d9d1a874b7b -a 10.0.0.2 -s 4420 -i 4 00:14:50.814 11:38:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:50.814 11:38:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:14:50.814 11:38:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:50.814 11:38:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:14:50.814 
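With the app listening on /var/tmp/spdk.sock, ns_masking.sh provisions everything over JSON-RPC: a TCP transport with an 8192-byte IO unit, two 64 MiB malloc bdevs with 512-byte blocks, subsystem cnode1 (allow-any-host, serial SPDKISFASTANDAWESOME), Malloc1 as namespace 1, and a listener on 10.0.0.2:4420; the initiator then connects with an explicit host NQN and host identifier, which is what the masking RPCs key on later. The sequence, with rpc.py shortened from the full path used in the trace:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py bdev_malloc_create 64 512 -b Malloc2
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I dcaf5abc-d9e9-4d79-a40a-4d9d1a874b7b -a 10.0.0.2 -s 4420 -i 4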
11:38:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:14:52.730 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:52.730 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:52.730 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:52.730 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:52.730 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:52.730 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:14:52.730 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:52.730 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:52.991 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:52.991 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:52.991 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:52.991 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:52.991 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:52.991 [ 0]:0x1 00:14:52.991 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:52.991 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:52.991 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9e51432d856c453e8d7669bc766bedef 00:14:52.991 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9e51432d856c453e8d7669bc766bedef != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:52.991 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:52.991 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:52.991 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:52.991 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:53.252 [ 0]:0x1 00:14:53.252 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:53.252 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:53.252 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9e51432d856c453e8d7669bc766bedef 00:14:53.252 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9e51432d856c453e8d7669bc766bedef != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:53.252 11:38:18 
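The "[ 0]:0x1" output above comes from ns_masking.sh's ns_is_visible helper: a namespace counts as visible when it appears in `nvme list-ns` and `nvme id-ns` reports a non-zero NGUID for it (hidden namespaces report an all-zero NGUID, which is what the NOT checks below rely on). A reconstruction from trace lines @43-@45, not the verbatim function, with the controller node hard-coded as in this run:

    ns_is_visible() {    # usage: ns_is_visible 0x1
        local nsid=$1 nguid
        nvme list-ns /dev/nvme0 | grep "$nsid"
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        # visible iff the controller reports a real (non-zero) NGUID
        [[ $nguid != "00000000000000000000000000000000" ]]
    }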
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:53.252 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:53.252 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:53.252 [ 1]:0x2 00:14:53.252 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:53.252 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:53.252 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=08e85fea984f494fa5691dd99e92ccc4 00:14:53.252 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 08e85fea984f494fa5691dd99e92ccc4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:53.252 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:53.252 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:53.513 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.513 11:38:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:53.775 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:53.775 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:53.775 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dcaf5abc-d9e9-4d79-a40a-4d9d1a874b7b -a 10.0.0.2 -s 4420 -i 4 00:14:54.037 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:54.037 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:14:54.037 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:54.037 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 1 ]] 00:14:54.037 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=1 00:14:54.037 11:38:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:14:55.951 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:55.951 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:56.212 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:56.212 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:56.212 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:56.212 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # 
return 0 00:14:56.212 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:56.212 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:56.212 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:56.212 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:56.212 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:56.212 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:56.212 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:56.212 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:56.212 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:56.212 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:56.212 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:56.212 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:56.212 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:56.212 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:56.212 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:56.212 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:56.212 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:56.212 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:56.212 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:56.212 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:56.212 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:56.212 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:56.212 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:56.212 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:56.212 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:56.212 [ 0]:0x2 00:14:56.212 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:56.212 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:56.474 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=08e85fea984f494fa5691dd99e92ccc4 00:14:56.474 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 08e85fea984f494fa5691dd99e92ccc4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:56.474 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:56.474 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:56.474 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:56.474 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:56.474 [ 0]:0x1 00:14:56.474 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:56.474 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:56.474 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9e51432d856c453e8d7669bc766bedef 00:14:56.474 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9e51432d856c453e8d7669bc766bedef != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:56.474 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:56.474 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:56.474 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:56.474 [ 1]:0x2 00:14:56.736 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:56.736 11:38:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:56.736 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=08e85fea984f494fa5691dd99e92ccc4 00:14:56.736 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 08e85fea984f494fa5691dd99e92ccc4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:56.736 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:56.736 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:56.736 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:56.736 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:56.736 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:56.736 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:56.736 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:56.736 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:56.736 11:38:22 
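This is the core of the test. Namespace 1 was re-attached with --no-auto-visible, so it stays hidden from every controller until it is explicitly mapped to a host NQN; the checks above confirm that nvmf_ns_add_host makes it reappear on the live connection (NGUID 9e51... instead of zeros), and the nvmf_ns_remove_host just traced is about to hide it again, all without reconnecting. The three RPCs that implement the mask:

    # attach hidden from all hosts, then grant/revoke visibility per host NQN
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # expose
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # hide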
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:56.736 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:56.736 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:56.736 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:56.736 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:56.997 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:56.997 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:56.997 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:56.997 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:56.997 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:56.997 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:56.997 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:56.997 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:56.997 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:56.997 [ 0]:0x2 00:14:56.997 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:56.997 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:56.997 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=08e85fea984f494fa5691dd99e92ccc4 00:14:56.997 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 08e85fea984f494fa5691dd99e92ccc4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:56.997 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:56.997 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:56.997 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.997 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:57.259 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:57.259 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I dcaf5abc-d9e9-4d79-a40a-4d9d1a874b7b -a 10.0.0.2 -s 4420 -i 4 00:14:57.519 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:57.519 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:14:57.519 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:57.519 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:14:57.519 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:14:57.519 11:38:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:14:59.430 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:59.430 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:59.430 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:59.430 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:14:59.430 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:59.430 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:14:59.430 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:59.430 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:59.430 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:59.430 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:59.430 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:59.430 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:59.430 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:59.430 [ 0]:0x1 00:14:59.430 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:59.431 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:59.431 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=9e51432d856c453e8d7669bc766bedef 00:14:59.431 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 9e51432d856c453e8d7669bc766bedef != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:59.431 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:59.431 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:59.431 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:59.431 [ 1]:0x2 00:14:59.431 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:59.431 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:59.692 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=08e85fea984f494fa5691dd99e92ccc4 00:14:59.692 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 08e85fea984f494fa5691dd99e92ccc4 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:59.692 11:38:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:59.692 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:59.692 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:59.692 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:59.692 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:59.692 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:59.692 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:59.692 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:59.692 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:59.692 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:59.692 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:59.692 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:59.692 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:59.954 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:59.954 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:59.954 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:59.954 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:59.954 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:59.954 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:59.954 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:59.954 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:59.954 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:59.954 [ 0]:0x2 00:14:59.954 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:59.954 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:59.954 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=08e85fea984f494fa5691dd99e92ccc4 00:14:59.954 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 08e85fea984f494fa5691dd99e92ccc4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:59.954 11:38:25 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:59.954 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:59.954 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:59.954 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:59.954 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:59.954 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:59.954 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:59.954 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:59.954 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:59.954 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:59.954 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:59.954 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:59.954 [2024-11-15 11:38:25.410291] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:59.954 request: 00:14:59.954 { 00:14:59.954 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:59.954 "nsid": 2, 00:14:59.954 "host": "nqn.2016-06.io.spdk:host1", 00:14:59.954 "method": "nvmf_ns_remove_host", 00:14:59.954 "req_id": 1 00:14:59.954 } 00:14:59.954 Got JSON-RPC error response 00:14:59.954 response: 00:14:59.954 { 00:14:59.954 "code": -32602, 00:14:59.954 "message": "Invalid parameters" 00:14:59.954 } 00:14:59.954 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:59.954 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:59.954 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:59.955 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:59.955 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:59.955 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:59.955 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:59.955 11:38:25 
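The NOT wrapper above expects that failure: namespace 2 was added without --no-auto-visible, and the target rejects visibility edits on it (nvmf_rpc_ns_visible_paused logs "Unable to add/remove ..." and rpc.py surfaces JSON-RPC -32602, Invalid parameters). That reading is inferred from the test's intent rather than stated outright in the log; as a standalone negative check it would look like:

    # expected to fail: ns 2 is auto-visible, so per-host masking does not apply to it
    if rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1; then
        echo "BUG: masking RPC succeeded on an auto-visible namespace" >&2
        exit 1
    fi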
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:59.955 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:59.955 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:59.955 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:59.955 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:00.217 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:00.217 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:00.217 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:00.217 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:00.217 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:00.217 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:00.217 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:00.217 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:00.217 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:00.217 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:00.217 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:00.217 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:00.217 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:00.217 [ 0]:0x2 00:15:00.217 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:00.217 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:00.217 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=08e85fea984f494fa5691dd99e92ccc4 00:15:00.217 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 08e85fea984f494fa5691dd99e92ccc4 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:00.217 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:00.217 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:00.479 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.479 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1016116 00:15:00.479 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:00.479 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:00.479 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1016116 /var/tmp/host.sock 00:15:00.479 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 1016116 ']' 00:15:00.479 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:15:00.479 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:00.479 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:00.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:00.479 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:00.479 11:38:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:00.479 [2024-11-15 11:38:25.814190] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:15:00.479 [2024-11-15 11:38:25.814246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1016116 ] 00:15:00.479 [2024-11-15 11:38:25.902098] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:00.479 [2024-11-15 11:38:25.938069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:01.420 11:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:01.420 11:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:15:01.420 11:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:01.420 11:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:01.682 11:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 3f63d7ff-8c78-4ed8-95af-d4b1051fcdfa 00:15:01.682 11:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:01.682 11:38:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 3F63D7FF8C784ED895AFD4B1051FCDFA -i 00:15:01.682 11:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 36ed62e2-45cd-444e-ad6f-8f1e3b76ce39 00:15:01.682 11:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:01.682 11:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 36ED62E245CD444EAD6F8F1E3B76CE39 -i 00:15:01.943 11:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:02.204 11:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:02.204 11:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:02.204 11:38:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:02.776 nvme0n1 00:15:02.776 11:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:02.776 11:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:03.037 nvme1n2 00:15:03.037 11:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:03.037 11:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:03.037 11:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:03.037 11:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:03.037 11:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:03.298 11:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:03.298 11:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:03.298 11:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:03.298 11:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:03.298 11:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 3f63d7ff-8c78-4ed8-95af-d4b1051fcdfa == \3\f\6\3\d\7\f\f\-\8\c\7\8\-\4\e\d\8\-\9\5\a\f\-\d\4\b\1\0\5\1\f\c\d\f\a ]] 00:15:03.298 11:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:03.298 11:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:03.298 11:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:03.559 11:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
36ed62e2-45cd-444e-ad6f-8f1e3b76ce39 == \3\6\e\d\6\2\e\2\-\4\5\c\d\-\4\4\4\e\-\a\d\6\f\-\8\f\1\e\3\b\7\6\c\e\3\9 ]] 00:15:03.559 11:38:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:03.820 11:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:03.820 11:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 3f63d7ff-8c78-4ed8-95af-d4b1051fcdfa 00:15:03.820 11:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:03.820 11:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 3F63D7FF8C784ED895AFD4B1051FCDFA 00:15:03.820 11:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:03.820 11:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 3F63D7FF8C784ED895AFD4B1051FCDFA 00:15:03.820 11:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:03.820 11:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:03.820 11:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:03.820 11:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:03.820 11:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:03.820 11:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:03.820 11:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:03.820 11:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:03.820 11:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 3F63D7FF8C784ED895AFD4B1051FCDFA 00:15:04.081 [2024-11-15 11:38:29.408757] bdev.c:8653:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:15:04.081 [2024-11-15 11:38:29.408787] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:15:04.081 [2024-11-15 11:38:29.408794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:04.081 request: 00:15:04.081 { 00:15:04.081 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:04.081 "namespace": { 00:15:04.081 "bdev_name": 
"invalid", 00:15:04.081 "nsid": 1, 00:15:04.081 "nguid": "3F63D7FF8C784ED895AFD4B1051FCDFA", 00:15:04.081 "no_auto_visible": false, 00:15:04.081 "no_metadata": false 00:15:04.081 }, 00:15:04.081 "method": "nvmf_subsystem_add_ns", 00:15:04.081 "req_id": 1 00:15:04.081 } 00:15:04.081 Got JSON-RPC error response 00:15:04.081 response: 00:15:04.081 { 00:15:04.081 "code": -32602, 00:15:04.081 "message": "Invalid parameters" 00:15:04.081 } 00:15:04.081 11:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:04.081 11:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:04.081 11:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:04.081 11:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:04.081 11:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 3f63d7ff-8c78-4ed8-95af-d4b1051fcdfa 00:15:04.081 11:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:04.081 11:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 3F63D7FF8C784ED895AFD4B1051FCDFA -i 00:15:04.341 11:38:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:15:06.252 11:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:15:06.252 11:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:15:06.252 11:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:06.512 11:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:15:06.512 11:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1016116 00:15:06.512 11:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 1016116 ']' 00:15:06.512 11:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 1016116 00:15:06.512 11:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:15:06.512 11:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:06.512 11:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1016116 00:15:06.512 11:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:15:06.512 11:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:15:06.512 11:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1016116' 00:15:06.512 killing process with pid 1016116 00:15:06.512 11:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 1016116 00:15:06.512 11:38:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 1016116 00:15:06.772 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:06.772 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:15:06.772 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:15:06.772 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:06.772 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:15:06.772 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:06.772 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:15:06.772 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:06.772 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:06.772 rmmod nvme_tcp 00:15:06.772 rmmod nvme_fabrics 00:15:06.772 rmmod nvme_keyring 00:15:07.033 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:07.033 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:15:07.033 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:15:07.033 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 1013730 ']' 00:15:07.033 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 1013730 00:15:07.033 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 1013730 ']' 00:15:07.033 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 1013730 00:15:07.033 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:15:07.033 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:07.033 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1013730 00:15:07.033 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:07.033 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:07.033 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1013730' 00:15:07.033 killing process with pid 1013730 00:15:07.033 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 1013730 00:15:07.033 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 1013730 00:15:07.033 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:07.033 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:07.033 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:07.033 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:15:07.033 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:15:07.033 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
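Teardown mirrors setup: nvmftestfini unloads nvme-tcp, nvme-fabrics and nvme-keyring, kills the target process, and restores the firewall through the iptr helper, which exploits the SPDK_NVMF comment that ipts attached when the rule was inserted; filtering the saved ruleset and reloading it sweeps every SPDK-tagged rule in one pass while leaving all other rules untouched:

    # tag-and-sweep cleanup: drop every rule whose saved form carries the SPDK_NVMF comment
    iptables-save | grep -v SPDK_NVMF | iptables-restore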
00:15:07.033 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:15:07.033 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:07.033 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:07.033 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.033 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:07.033 11:38:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:09.581 00:15:09.581 real 0m28.472s 00:15:09.581 user 0m32.265s 00:15:09.581 sys 0m8.293s 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:09.581 ************************************ 00:15:09.581 END TEST nvmf_ns_masking 00:15:09.581 ************************************ 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:09.581 ************************************ 00:15:09.581 START TEST nvmf_nvme_cli 00:15:09.581 ************************************ 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:09.581 * Looking for test storage... 
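The START TEST / END TEST banners bracketing each sub-test come from the run_test wrapper in autotest_common.sh, which fences and times a child script. Roughly (a sketch; the exact body, including its timing bookkeeping, is an assumption):

run_test() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    "$@"                               # e.g. nvme_cli.sh --transport=tcp
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return "$rc"
}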
00:15:09.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:09.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.581 --rc genhtml_branch_coverage=1 00:15:09.581 --rc genhtml_function_coverage=1 00:15:09.581 --rc genhtml_legend=1 00:15:09.581 --rc geninfo_all_blocks=1 00:15:09.581 --rc geninfo_unexecuted_blocks=1 00:15:09.581 00:15:09.581 ' 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:09.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.581 --rc genhtml_branch_coverage=1 00:15:09.581 --rc genhtml_function_coverage=1 00:15:09.581 --rc genhtml_legend=1 00:15:09.581 --rc geninfo_all_blocks=1 00:15:09.581 --rc geninfo_unexecuted_blocks=1 00:15:09.581 00:15:09.581 ' 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:09.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.581 --rc genhtml_branch_coverage=1 00:15:09.581 --rc genhtml_function_coverage=1 00:15:09.581 --rc genhtml_legend=1 00:15:09.581 --rc geninfo_all_blocks=1 00:15:09.581 --rc geninfo_unexecuted_blocks=1 00:15:09.581 00:15:09.581 ' 00:15:09.581 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:09.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:09.581 --rc genhtml_branch_coverage=1 00:15:09.581 --rc genhtml_function_coverage=1 00:15:09.581 --rc genhtml_legend=1 00:15:09.581 --rc geninfo_all_blocks=1 00:15:09.581 --rc geninfo_unexecuted_blocks=1 00:15:09.581 00:15:09.581 ' 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
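The block above is the lcov version probe: scripts/common.sh splits both version strings on '.', '-' and ':' and compares them field by field, padding the shorter one with zeros. A self-contained sketch of the 'lt 1.15 2' call being traced (simplified; the real cmp_versions also validates each field through its decimal helper):

lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0    # strictly less
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1    # equal is not less-than
}
lt 1.15 2 && echo "installed lcov predates 2.x"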
00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:09.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:09.582 11:38:34 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:15:09.582 11:38:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:17.730 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:17.730 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:17.730 
11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:17.730 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:17.730 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:17.730 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:17.731 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:17.731 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:17.731 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:17.731 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:17.731 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:17.731 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:17.731 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:17.731 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:17.731 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:17.731 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:17.731 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:17.731 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:17.731 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:17.731 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:17.731 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:17.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.537 ms 00:15:17.731 00:15:17.731 --- 10.0.0.2 ping statistics --- 00:15:17.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.731 rtt min/avg/max/mdev = 0.537/0.537/0.537/0.000 ms 00:15:17.731 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:17.731 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:17.731 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:15:17.731 00:15:17.731 --- 10.0.0.1 ping statistics --- 00:15:17.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.731 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:15:17.731 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:17.731 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:15:17.731 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:17.731 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:17.731 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:17.731 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:17.731 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:17.731 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:17.731 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:17.731 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:17.731 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:17.731 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:17.731 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:17.731 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=1021621 00:15:17.731 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 1021621 00:15:17.731 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:17.731 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # '[' -z 1021621 ']' 00:15:17.731 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.731 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:17.731 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:17.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.731 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:17.731 11:38:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:17.731 [2024-11-15 11:38:42.460182] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
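Everything from the PCI scan to the two pings above is nvmftestinit building a self-contained NVMe/TCP link: the two E810 ports (cvl_0_0, cvl_0_1) live on the same host, so one is moved into a network namespace to act as the target while the other stays in the root namespace as the initiator. Condensed from the trace, with the absolute paths shortened:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator stays in root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                      # initiator -> target sanity check
# the target itself then runs inside the namespace:
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF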
00:15:17.731 [2024-11-15 11:38:42.460253] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.731 [2024-11-15 11:38:42.565006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:17.731 [2024-11-15 11:38:42.619511] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:17.731 [2024-11-15 11:38:42.619588] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:17.731 [2024-11-15 11:38:42.619597] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:17.731 [2024-11-15 11:38:42.619605] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:17.731 [2024-11-15 11:38:42.619611] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:17.731 [2024-11-15 11:38:42.621849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:17.731 [2024-11-15 11:38:42.622008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:17.731 [2024-11-15 11:38:42.622169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.731 [2024-11-15 11:38:42.622169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:17.994 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:17.994 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@866 -- # return 0 00:15:17.994 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:17.994 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:17.994 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:17.994 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:17.994 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:17.994 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.994 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:17.994 [2024-11-15 11:38:43.336204] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:17.994 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.994 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:17.994 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.994 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:17.994 Malloc0 00:15:17.994 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.994 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:17.994 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
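Once the four reactors are up, the rpc_cmd calls that follow reduce to a short provisioning sequence against the fresh target. A condensed sketch with the flags copied from the trace (rpc.py shorthand and the default /var/tmp/spdk.sock are assumptions):

rpc.py nvmf_create_transport -t tcp -o -u 8192          # -u 8192: I/O unit size
rpc.py bdev_malloc_create 64 512 -b Malloc0             # 64 MiB bdev, 512 B blocks
rpc.py bdev_malloc_create 64 512 -b Malloc1
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a \
    -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291  # -a: allow any host
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420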
00:15:17.994 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:17.994 Malloc1 00:15:17.994 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.994 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:17.994 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.994 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:17.994 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.994 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:17.994 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.994 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:17.994 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.994 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:17.994 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.994 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:17.994 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.994 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:17.994 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.994 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:17.994 [2024-11-15 11:38:43.460409] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:17.994 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.994 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:17.994 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.994 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:17.994 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.994 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:15:18.255 00:15:18.255 Discovery Log Number of Records 2, Generation counter 2 00:15:18.255 =====Discovery Log Entry 0====== 00:15:18.255 trtype: tcp 00:15:18.255 adrfam: ipv4 00:15:18.255 subtype: current discovery subsystem 00:15:18.255 treq: not required 00:15:18.255 portid: 0 00:15:18.255 trsvcid: 4420 00:15:18.255 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:15:18.255 traddr: 10.0.0.2 00:15:18.255 eflags: explicit discovery connections, duplicate discovery information 00:15:18.255 sectype: none 00:15:18.255 =====Discovery Log Entry 1====== 00:15:18.255 trtype: tcp 00:15:18.255 adrfam: ipv4 00:15:18.255 subtype: nvme subsystem 00:15:18.255 treq: not required 00:15:18.255 portid: 0 00:15:18.255 trsvcid: 4420 00:15:18.255 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:18.255 traddr: 10.0.0.2 00:15:18.255 eflags: none 00:15:18.255 sectype: none 00:15:18.255 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:18.255 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:18.255 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:18.255 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:18.255 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:18.255 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:18.255 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:18.255 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:18.255 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:18.255 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:18.255 11:38:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:20.171 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:20.171 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # local i=0 00:15:20.171 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:20.171 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:15:20.171 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:15:20.171 11:38:45 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # sleep 2 00:15:22.084 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:22.084 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:22.084 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:15:22.084 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:15:22.084 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:22.084 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # return 0 00:15:22.084 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:22.084 11:38:47 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:22.084 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:22.084 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:22.084 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:22.084 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:22.084 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:22.084 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:22.084 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:22.084 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:15:22.084 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:22.084 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:22.084 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:15:22.084 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:22.084 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:15:22.084 /dev/nvme0n2 ]] 00:15:22.084 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:22.084 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:22.084 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:15:22.084 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:22.084 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:15:22.084 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:15:22.084 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:22.084 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:15:22.084 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:22.084 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:22.084 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:15:22.084 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:22.084 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:22.084 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:15:22.084 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:15:22.084 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:22.084 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:22.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:22.345 11:38:47 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:22.345 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # local i=0 00:15:22.345 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:15:22.345 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:22.345 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:15:22.345 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:22.606 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1233 -- # return 0 00:15:22.606 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:22.606 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:22.606 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.606 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:22.606 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.606 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:22.606 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:22.606 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:22.606 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:15:22.606 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:22.606 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:15:22.606 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:22.606 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:22.606 rmmod nvme_tcp 00:15:22.606 rmmod nvme_fabrics 00:15:22.606 rmmod nvme_keyring 00:15:22.606 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:22.606 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:15:22.606 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:15:22.606 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 1021621 ']' 00:15:22.606 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 1021621 00:15:22.606 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' -z 1021621 ']' 00:15:22.606 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # kill -0 1021621 00:15:22.606 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # uname 00:15:22.606 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:22.606 11:38:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 
1021621 00:15:22.606 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:22.606 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:22.606 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1021621' 00:15:22.606 killing process with pid 1021621 00:15:22.606 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # kill 1021621 00:15:22.606 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@976 -- # wait 1021621 00:15:22.868 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:22.868 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:22.868 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:22.868 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:15:22.868 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:15:22.868 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:22.868 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:15:22.868 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:22.868 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:22.868 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:22.868 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:22.868 11:38:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:24.882 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:24.882 00:15:24.882 real 0m15.590s 00:15:24.882 user 0m24.276s 00:15:24.882 sys 0m6.425s 00:15:24.882 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:24.882 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:24.882 ************************************ 00:15:24.882 END TEST nvmf_nvme_cli 00:15:24.882 ************************************ 00:15:24.882 11:38:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:15:24.882 11:38:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:24.882 11:38:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:24.882 11:38:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:24.882 11:38:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:24.882 ************************************ 00:15:24.882 START TEST nvmf_vfio_user 00:15:24.882 ************************************ 00:15:24.882 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:15:25.157 * Looking for test storage... 00:15:25.157 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:25.157 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:25.157 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:15:25.157 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:25.157 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:25.157 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:25.157 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:25.157 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:25.157 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:15:25.157 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:15:25.157 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:15:25.157 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:15:25.157 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:15:25.157 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:15:25.157 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:15:25.157 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:25.157 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:15:25.157 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:15:25.157 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:25.157 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:25.157 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:15:25.157 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:15:25.157 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:25.157 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:15:25.157 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:15:25.157 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:15:25.157 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:15:25.157 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:25.157 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:15:25.157 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:15:25.157 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:25.157 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:25.157 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:15:25.157 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:25.157 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:25.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.158 --rc genhtml_branch_coverage=1 00:15:25.158 --rc genhtml_function_coverage=1 00:15:25.158 --rc genhtml_legend=1 00:15:25.158 --rc geninfo_all_blocks=1 00:15:25.158 --rc geninfo_unexecuted_blocks=1 00:15:25.158 00:15:25.158 ' 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:25.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.158 --rc genhtml_branch_coverage=1 00:15:25.158 --rc genhtml_function_coverage=1 00:15:25.158 --rc genhtml_legend=1 00:15:25.158 --rc geninfo_all_blocks=1 00:15:25.158 --rc geninfo_unexecuted_blocks=1 00:15:25.158 00:15:25.158 ' 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:25.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.158 --rc genhtml_branch_coverage=1 00:15:25.158 --rc genhtml_function_coverage=1 00:15:25.158 --rc genhtml_legend=1 00:15:25.158 --rc geninfo_all_blocks=1 00:15:25.158 --rc geninfo_unexecuted_blocks=1 00:15:25.158 00:15:25.158 ' 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:25.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:25.158 --rc genhtml_branch_coverage=1 00:15:25.158 --rc genhtml_function_coverage=1 00:15:25.158 --rc genhtml_legend=1 00:15:25.158 --rc geninfo_all_blocks=1 00:15:25.158 --rc geninfo_unexecuted_blocks=1 00:15:25.158 00:15:25.158 ' 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:25.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
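Two small shell idioms are worth pulling out of the trace above. The lcov gate walks scripts/common.sh's cmp_versions field by field, and the lone `[: : integer expression expected` complaint comes from handing `-eq` an empty string. A minimal sketch of both in plain bash; `ver_lt` and `SPDK_TEST_EXAMPLE` are illustrative names, not the script's own:

    # Condensed replay of the lt/cmp_versions walk above: split each
    # version on dots/dashes and compare numerically, field by field.
    ver_lt() {
        local -a ver1 ver2
        local IFS='.-' v=0
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        while (( v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}) )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # greater: not less-than
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly less
            (( v++ ))
        done
        return 1    # equal versions are not "less than"
    }
    ver_lt 1.15 2 && echo "old lcov: use legacy --rc option names"

    # The "[: : integer expression expected" line above is bash refusing
    # to -eq an empty string; defaulting the variable avoids the noise.
    # SPDK_TEST_EXAMPLE is a hypothetical stand-in for whatever
    # common.sh line 33 actually tests.
    [ "${SPDK_TEST_EXAMPLE:-0}" -eq 1 ] && echo "feature enabled"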
00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1023433 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1023433' 00:15:25.158 Process pid: 1023433 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1023433 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 1023433 ']' 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:25.158 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:25.158 [2024-11-15 11:38:50.610602] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:15:25.158 [2024-11-15 11:38:50.610660] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:25.446 [2024-11-15 11:38:50.670990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:25.446 [2024-11-15 11:38:50.700870] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:25.446 [2024-11-15 11:38:50.700900] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:25.446 [2024-11-15 11:38:50.700906] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:25.446 [2024-11-15 11:38:50.700910] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:25.446 [2024-11-15 11:38:50.700915] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:25.446 [2024-11-15 11:38:50.702186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.446 [2024-11-15 11:38:50.702340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:25.446 [2024-11-15 11:38:50.702484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.446 [2024-11-15 11:38:50.702487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:25.446 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:25.446 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:15:25.446 11:38:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:26.395 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:26.667 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:26.667 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:26.667 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:26.667 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:26.667 11:38:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:26.927 Malloc1 00:15:26.927 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:26.927 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:27.187 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:27.446 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:27.446 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:27.446 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:27.446 Malloc2 00:15:27.706 11:38:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
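Condensed, the target bring-up traced here is: start nvmf_tgt, wait for its RPC socket, create the VFIOUSER transport once, then for each device create a malloc bdev, a subsystem, a namespace, and a vfio-user listener (the second device's remaining RPCs follow just below). A standalone replay using the same paths and RPC names as the log; the `$SPDK` shorthand and the socket-polling loop (a crude stand-in for the harness's waitforlisten) are mine:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"

    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' &   # target on cores 0-3
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done          # wait for the RPC socket

    $RPC nvmf_create_transport -t VFIOUSER                       # once per target
    for i in 1 2; do
        dir=/var/run/vfio-user/domain/vfio-user$i/$i
        mkdir -p "$dir"                                          # socket/region files live here
        $RPC bdev_malloc_create 64 512 -b Malloc$i               # 64 MiB bdev, 512 B blocks
        $RPC nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $RPC nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
            -t VFIOUSER -a "$dir" -s 0                           # -a is the socket dir
    done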
00:15:27.706 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:27.966 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:28.229 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:28.229 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:28.229 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:28.229 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:28.229 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:28.229 11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:28.229 [2024-11-15 11:38:53.498335] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:15:28.229 [2024-11-15 11:38:53.498360] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1023874 ] 00:15:28.229 [2024-11-15 11:38:53.534878] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:28.229 [2024-11-15 11:38:53.540125] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:28.229 [2024-11-15 11:38:53.540143] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f669eebf000 00:15:28.229 [2024-11-15 11:38:53.541118] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:28.229 [2024-11-15 11:38:53.542119] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:28.229 [2024-11-15 11:38:53.543127] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:28.229 [2024-11-15 11:38:53.544134] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:28.229 [2024-11-15 11:38:53.545134] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:28.229 [2024-11-15 11:38:53.546143] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:28.229 [2024-11-15 11:38:53.547150] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:15:28.229 [2024-11-15 11:38:53.548153] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:28.229 [2024-11-15 11:38:53.549161] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:28.229 [2024-11-15 11:38:53.549168] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f669eeb4000 00:15:28.229 [2024-11-15 11:38:53.550082] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:28.229 [2024-11-15 11:38:53.559610] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:28.229 [2024-11-15 11:38:53.559630] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:15:28.229 [2024-11-15 11:38:53.565257] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:28.229 [2024-11-15 11:38:53.565289] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:28.229 [2024-11-15 11:38:53.565349] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:15:28.229 [2024-11-15 11:38:53.565361] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:15:28.229 [2024-11-15 11:38:53.565365] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:15:28.229 [2024-11-15 11:38:53.566259] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:28.229 [2024-11-15 11:38:53.566267] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:15:28.229 [2024-11-15 11:38:53.566272] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:15:28.229 [2024-11-15 11:38:53.567267] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:28.229 [2024-11-15 11:38:53.567274] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:15:28.229 [2024-11-15 11:38:53.567279] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:28.229 [2024-11-15 11:38:53.568274] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:28.229 [2024-11-15 11:38:53.568281] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:28.229 [2024-11-15 11:38:53.569284] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:15:28.229 [2024-11-15 11:38:53.569289] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:15:28.229 [2024-11-15 11:38:53.569293] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:28.229 [2024-11-15 11:38:53.569298] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:28.229 [2024-11-15 11:38:53.569403] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:15:28.229 [2024-11-15 11:38:53.569407] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:28.229 [2024-11-15 11:38:53.569411] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:28.229 [2024-11-15 11:38:53.570292] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:28.229 [2024-11-15 11:38:53.571301] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:28.229 [2024-11-15 11:38:53.572303] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:28.229 [2024-11-15 11:38:53.573308] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:28.229 [2024-11-15 11:38:53.573370] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:28.229 [2024-11-15 11:38:53.574315] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:28.229 [2024-11-15 11:38:53.574323] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:28.229 [2024-11-15 11:38:53.574326] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:28.229 [2024-11-15 11:38:53.574341] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:15:28.229 [2024-11-15 11:38:53.574346] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:28.229 [2024-11-15 11:38:53.574357] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:28.229 [2024-11-15 11:38:53.574361] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:28.229 [2024-11-15 11:38:53.574365] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:28.229 [2024-11-15 11:38:53.574375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:15:28.229 [2024-11-15 11:38:53.574410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:28.229 [2024-11-15 11:38:53.574417] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:15:28.229 [2024-11-15 11:38:53.574421] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:15:28.229 [2024-11-15 11:38:53.574424] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:15:28.229 [2024-11-15 11:38:53.574428] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:28.229 [2024-11-15 11:38:53.574433] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:15:28.229 [2024-11-15 11:38:53.574436] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:15:28.230 [2024-11-15 11:38:53.574439] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:15:28.230 [2024-11-15 11:38:53.574446] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:28.230 [2024-11-15 11:38:53.574453] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:28.230 [2024-11-15 11:38:53.574462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:28.230 [2024-11-15 11:38:53.574470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:28.230 [2024-11-15 11:38:53.574476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:28.230 [2024-11-15 11:38:53.574482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:28.230 [2024-11-15 11:38:53.574488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:28.230 [2024-11-15 11:38:53.574491] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:28.230 [2024-11-15 11:38:53.574496] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:28.230 [2024-11-15 11:38:53.574502] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:28.230 [2024-11-15 11:38:53.574511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:28.230 [2024-11-15 11:38:53.574516] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:15:28.230 
[2024-11-15 11:38:53.574519] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:28.230 [2024-11-15 11:38:53.574524] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:15:28.230 [2024-11-15 11:38:53.574528] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:28.230 [2024-11-15 11:38:53.574536] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:28.230 [2024-11-15 11:38:53.574542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:28.230 [2024-11-15 11:38:53.574590] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:15:28.230 [2024-11-15 11:38:53.574596] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:28.230 [2024-11-15 11:38:53.574602] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:28.230 [2024-11-15 11:38:53.574606] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:28.230 [2024-11-15 11:38:53.574609] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:28.230 [2024-11-15 11:38:53.574614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:28.230 [2024-11-15 11:38:53.574622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:28.230 [2024-11-15 11:38:53.574629] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:15:28.230 [2024-11-15 11:38:53.574636] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:15:28.230 [2024-11-15 11:38:53.574642] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:28.230 [2024-11-15 11:38:53.574648] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:28.230 [2024-11-15 11:38:53.574652] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:28.230 [2024-11-15 11:38:53.574655] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:28.230 [2024-11-15 11:38:53.574659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:28.230 [2024-11-15 11:38:53.574676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:28.230 [2024-11-15 11:38:53.574686] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:15:28.230 [2024-11-15 11:38:53.574692] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:28.230 [2024-11-15 11:38:53.574697] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:28.230 [2024-11-15 11:38:53.574700] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:28.230 [2024-11-15 11:38:53.574702] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:28.230 [2024-11-15 11:38:53.574707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:28.230 [2024-11-15 11:38:53.574716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:28.230 [2024-11-15 11:38:53.574722] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:28.230 [2024-11-15 11:38:53.574727] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:28.230 [2024-11-15 11:38:53.574734] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:15:28.230 [2024-11-15 11:38:53.574738] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:28.230 [2024-11-15 11:38:53.574741] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:28.230 [2024-11-15 11:38:53.574745] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:15:28.230 [2024-11-15 11:38:53.574749] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:28.230 [2024-11-15 11:38:53.574752] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:15:28.230 [2024-11-15 11:38:53.574756] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:15:28.230 [2024-11-15 11:38:53.574769] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:28.230 [2024-11-15 11:38:53.574776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:28.230 [2024-11-15 11:38:53.574784] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:28.230 [2024-11-15 11:38:53.574791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:28.230 [2024-11-15 11:38:53.574799] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:28.230 [2024-11-15 11:38:53.574809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:28.230 [2024-11-15 11:38:53.574817] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:28.230 [2024-11-15 11:38:53.574828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:28.230 [2024-11-15 11:38:53.574837] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:28.230 [2024-11-15 11:38:53.574840] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:28.230 [2024-11-15 11:38:53.574843] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:28.230 [2024-11-15 11:38:53.574845] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:28.230 [2024-11-15 11:38:53.574847] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:28.230 [2024-11-15 11:38:53.574852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:28.230 [2024-11-15 11:38:53.574857] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:28.230 [2024-11-15 11:38:53.574860] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:28.230 [2024-11-15 11:38:53.574863] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:28.230 [2024-11-15 11:38:53.574867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:28.230 [2024-11-15 11:38:53.574872] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:28.230 [2024-11-15 11:38:53.574875] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:28.230 [2024-11-15 11:38:53.574878] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:28.230 [2024-11-15 11:38:53.574883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:28.230 [2024-11-15 11:38:53.574888] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:28.230 [2024-11-15 11:38:53.574891] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:28.231 [2024-11-15 11:38:53.574894] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:28.231 [2024-11-15 11:38:53.574898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:28.231 [2024-11-15 11:38:53.574903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:28.231 [2024-11-15 11:38:53.574912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:15:28.231 [2024-11-15 11:38:53.574920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:28.231 [2024-11-15 11:38:53.574925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:28.231 ===================================================== 00:15:28.231 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:28.231 ===================================================== 00:15:28.231 Controller Capabilities/Features 00:15:28.231 ================================ 00:15:28.231 Vendor ID: 4e58 00:15:28.231 Subsystem Vendor ID: 4e58 00:15:28.231 Serial Number: SPDK1 00:15:28.231 Model Number: SPDK bdev Controller 00:15:28.231 Firmware Version: 25.01 00:15:28.231 Recommended Arb Burst: 6 00:15:28.231 IEEE OUI Identifier: 8d 6b 50 00:15:28.231 Multi-path I/O 00:15:28.231 May have multiple subsystem ports: Yes 00:15:28.231 May have multiple controllers: Yes 00:15:28.231 Associated with SR-IOV VF: No 00:15:28.231 Max Data Transfer Size: 131072 00:15:28.231 Max Number of Namespaces: 32 00:15:28.231 Max Number of I/O Queues: 127 00:15:28.231 NVMe Specification Version (VS): 1.3 00:15:28.231 NVMe Specification Version (Identify): 1.3 00:15:28.231 Maximum Queue Entries: 256 00:15:28.231 Contiguous Queues Required: Yes 00:15:28.231 Arbitration Mechanisms Supported 00:15:28.231 Weighted Round Robin: Not Supported 00:15:28.231 Vendor Specific: Not Supported 00:15:28.231 Reset Timeout: 15000 ms 00:15:28.231 Doorbell Stride: 4 bytes 00:15:28.231 NVM Subsystem Reset: Not Supported 00:15:28.231 Command Sets Supported 00:15:28.231 NVM Command Set: Supported 00:15:28.231 Boot Partition: Not Supported 00:15:28.231 Memory Page Size Minimum: 4096 bytes 00:15:28.231 Memory Page Size Maximum: 4096 bytes 00:15:28.231 Persistent Memory Region: Not Supported 00:15:28.231 Optional Asynchronous Events Supported 00:15:28.231 Namespace Attribute Notices: Supported 00:15:28.231 Firmware Activation Notices: Not Supported 00:15:28.231 ANA Change Notices: Not Supported 00:15:28.231 PLE Aggregate Log Change Notices: Not Supported 00:15:28.231 LBA Status Info Alert Notices: Not Supported 00:15:28.231 EGE Aggregate Log Change Notices: Not Supported 00:15:28.231 Normal NVM Subsystem Shutdown event: Not Supported 00:15:28.231 Zone Descriptor Change Notices: Not Supported 00:15:28.231 Discovery Log Change Notices: Not Supported 00:15:28.231 Controller Attributes 00:15:28.231 128-bit Host Identifier: Supported 00:15:28.231 Non-Operational Permissive Mode: Not Supported 00:15:28.231 NVM Sets: Not Supported 00:15:28.231 Read Recovery Levels: Not Supported 00:15:28.231 Endurance Groups: Not Supported 00:15:28.231 Predictable Latency Mode: Not Supported 00:15:28.231 Traffic Based Keep Alive: Not Supported 00:15:28.231 Namespace Granularity: Not Supported 00:15:28.231 SQ Associations: Not Supported 00:15:28.231 UUID List: Not Supported 00:15:28.231 Multi-Domain Subsystem: Not Supported 00:15:28.231 Fixed Capacity Management: Not Supported 00:15:28.231 Variable Capacity Management: Not Supported 00:15:28.231 Delete Endurance Group: Not Supported 00:15:28.231 Delete NVM Set: Not Supported 00:15:28.231 Extended LBA Formats Supported: Not Supported 00:15:28.231 Flexible Data Placement Supported: Not Supported 00:15:28.231 00:15:28.231 Controller Memory Buffer Support 00:15:28.231 ================================ 00:15:28.231 
Supported: No 00:15:28.231 00:15:28.231 Persistent Memory Region Support 00:15:28.231 ================================ 00:15:28.231 Supported: No 00:15:28.231 00:15:28.231 Admin Command Set Attributes 00:15:28.231 ============================ 00:15:28.231 Security Send/Receive: Not Supported 00:15:28.231 Format NVM: Not Supported 00:15:28.231 Firmware Activate/Download: Not Supported 00:15:28.231 Namespace Management: Not Supported 00:15:28.231 Device Self-Test: Not Supported 00:15:28.231 Directives: Not Supported 00:15:28.231 NVMe-MI: Not Supported 00:15:28.231 Virtualization Management: Not Supported 00:15:28.231 Doorbell Buffer Config: Not Supported 00:15:28.231 Get LBA Status Capability: Not Supported 00:15:28.231 Command & Feature Lockdown Capability: Not Supported 00:15:28.231 Abort Command Limit: 4 00:15:28.231 Async Event Request Limit: 4 00:15:28.231 Number of Firmware Slots: N/A 00:15:28.231 Firmware Slot 1 Read-Only: N/A 00:15:28.231 Firmware Activation Without Reset: N/A 00:15:28.231 Multiple Update Detection Support: N/A 00:15:28.231 Firmware Update Granularity: No Information Provided 00:15:28.231 Per-Namespace SMART Log: No 00:15:28.231 Asymmetric Namespace Access Log Page: Not Supported 00:15:28.231 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:28.231 Command Effects Log Page: Supported 00:15:28.231 Get Log Page Extended Data: Supported 00:15:28.231 Telemetry Log Pages: Not Supported 00:15:28.231 Persistent Event Log Pages: Not Supported 00:15:28.231 Supported Log Pages Log Page: May Support 00:15:28.231 Commands Supported & Effects Log Page: Not Supported 00:15:28.231 Feature Identifiers & Effects Log Page: May Support 00:15:28.231 NVMe-MI Commands & Effects Log Page: May Support 00:15:28.231 Data Area 4 for Telemetry Log: Not Supported 00:15:28.231 Error Log Page Entries Supported: 128 00:15:28.231 Keep Alive: Supported 00:15:28.231 Keep Alive Granularity: 10000 ms 00:15:28.231 00:15:28.231 NVM Command Set Attributes 00:15:28.231 ========================== 00:15:28.231 Submission Queue Entry Size 00:15:28.231 Max: 64 00:15:28.231 Min: 64 00:15:28.231 Completion Queue Entry Size 00:15:28.231 Max: 16 00:15:28.231 Min: 16 00:15:28.231 Number of Namespaces: 32 00:15:28.231 Compare Command: Supported 00:15:28.231 Write Uncorrectable Command: Not Supported 00:15:28.231 Dataset Management Command: Supported 00:15:28.231 Write Zeroes Command: Supported 00:15:28.231 Set Features Save Field: Not Supported 00:15:28.231 Reservations: Not Supported 00:15:28.231 Timestamp: Not Supported 00:15:28.231 Copy: Supported 00:15:28.231 Volatile Write Cache: Present 00:15:28.231 Atomic Write Unit (Normal): 1 00:15:28.231 Atomic Write Unit (PFail): 1 00:15:28.231 Atomic Compare & Write Unit: 1 00:15:28.231 Fused Compare & Write: Supported 00:15:28.231 Scatter-Gather List 00:15:28.231 SGL Command Set: Supported (Dword aligned) 00:15:28.231 SGL Keyed: Not Supported 00:15:28.231 SGL Bit Bucket Descriptor: Not Supported 00:15:28.231 SGL Metadata Pointer: Not Supported 00:15:28.231 Oversized SGL: Not Supported 00:15:28.231 SGL Metadata Address: Not Supported 00:15:28.231 SGL Offset: Not Supported 00:15:28.231 Transport SGL Data Block: Not Supported 00:15:28.231 Replay Protected Memory Block: Not Supported 00:15:28.231 00:15:28.231 Firmware Slot Information 00:15:28.231 ========================= 00:15:28.231 Active slot: 1 00:15:28.231 Slot 1 Firmware Revision: 25.01 00:15:28.231 00:15:28.231 00:15:28.231 Commands Supported and Effects 00:15:28.231 ============================== 00:15:28.231 Admin 
Commands 00:15:28.231 -------------- 00:15:28.231 Get Log Page (02h): Supported 00:15:28.231 Identify (06h): Supported 00:15:28.231 Abort (08h): Supported 00:15:28.231 Set Features (09h): Supported 00:15:28.231 Get Features (0Ah): Supported 00:15:28.232 Asynchronous Event Request (0Ch): Supported 00:15:28.232 Keep Alive (18h): Supported 00:15:28.232 I/O Commands 00:15:28.232 ------------ 00:15:28.232 Flush (00h): Supported LBA-Change 00:15:28.232 Write (01h): Supported LBA-Change 00:15:28.232 Read (02h): Supported 00:15:28.232 Compare (05h): Supported 00:15:28.232 Write Zeroes (08h): Supported LBA-Change 00:15:28.232 Dataset Management (09h): Supported LBA-Change 00:15:28.232 Copy (19h): Supported LBA-Change 00:15:28.232 00:15:28.232 Error Log 00:15:28.232 ========= 00:15:28.232 00:15:28.232 Arbitration 00:15:28.232 =========== 00:15:28.232 Arbitration Burst: 1 00:15:28.232 00:15:28.232 Power Management 00:15:28.232 ================ 00:15:28.232 Number of Power States: 1 00:15:28.232 Current Power State: Power State #0 00:15:28.232 Power State #0: 00:15:28.232 Max Power: 0.00 W 00:15:28.232 Non-Operational State: Operational 00:15:28.232 Entry Latency: Not Reported 00:15:28.232 Exit Latency: Not Reported 00:15:28.232 Relative Read Throughput: 0 00:15:28.232 Relative Read Latency: 0 00:15:28.232 Relative Write Throughput: 0 00:15:28.232 Relative Write Latency: 0 00:15:28.232 Idle Power: Not Reported 00:15:28.232 Active Power: Not Reported 00:15:28.232 Non-Operational Permissive Mode: Not Supported 00:15:28.232 00:15:28.232 Health Information 00:15:28.232 ================== 00:15:28.232 Critical Warnings: 00:15:28.232 Available Spare Space: OK 00:15:28.232 Temperature: OK 00:15:28.232 Device Reliability: OK 00:15:28.232 Read Only: No 00:15:28.232 Volatile Memory Backup: OK 00:15:28.232 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:28.232 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:28.232 Available Spare: 0% 00:15:28.232 Available Spare Threshold: 0% 00:15:28.232 Life Percentage Used: 0% 00:15:28.232 Data Units Read: 0 00:15:28.232 Data Units Written: 0 00:15:28.232 Host Read Commands: 0 00:15:28.232 Host Write Commands: 0 00:15:28.232 Controller Busy Time: 0 minutes 00:15:28.232 Power Cycles: 0 00:15:28.232 Power On Hours: 0 hours 00:15:28.232 Unsafe Shutdowns: 0 00:15:28.232 Unrecoverable Media Errors: 0 00:15:28.232 Lifetime Error Log Entries: 0 00:15:28.232 Warning Temperature Time: 0 minutes 00:15:28.232 Critical Temperature Time: 0 minutes 00:15:28.232 00:15:28.232 Number of Queues 00:15:28.232 ================ 00:15:28.232 Number of I/O Submission Queues: 127 00:15:28.232 Number of I/O Completion Queues: 127 00:15:28.232 00:15:28.232 Active Namespaces 00:15:28.232 ================= 00:15:28.232 Namespace ID:1 00:15:28.232 Error Recovery Timeout: Unlimited 00:15:28.232 Command Set Identifier: NVM (00h) 00:15:28.232 Deallocate: Supported 00:15:28.232 Deallocated/Unwritten Error: Not Supported 00:15:28.232 Deallocated Read Value: Unknown 00:15:28.232 Deallocate in Write Zeroes: Not Supported 00:15:28.232 Deallocated Guard Field: 0xFFFF 00:15:28.232 Flush: Supported 00:15:28.232 Reservation: Supported 00:15:28.232 Namespace Sharing Capabilities: Multiple Controllers 00:15:28.232 Size (in LBAs): 131072 (0GiB) 00:15:28.232 Capacity (in LBAs): 131072 (0GiB) 00:15:28.232 Utilization (in LBAs): 131072 (0GiB) 00:15:28.232 NGUID: CA8460D1EA9F48D483F276C69BB9073F 00:15:28.232 UUID: ca8460d1-ea9f-48d4-83f2-76c69bb9073f 00:15:28.232 Thin Provisioning: Not Supported 00:15:28.232 Per-NS Atomic Units: Yes 00:15:28.232 Atomic Boundary Size (Normal): 0 00:15:28.232 Atomic Boundary Size (PFail): 0 00:15:28.232 Atomic Boundary Offset: 0 00:15:28.232 Maximum Single Source Range Length: 65535 00:15:28.232 Maximum Copy Length: 65535 00:15:28.232 Maximum Source Range Count: 1 00:15:28.232 NGUID/EUI64 Never Reused: No 00:15:28.232 Namespace Write Protected: No 00:15:28.232 Number of LBA Formats: 1 00:15:28.232 Current LBA Format: LBA Format #00 00:15:28.232 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:28.232 00:15:28.232 
[2024-11-15 11:38:53.575000] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:28.232 [2024-11-15 11:38:53.575006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:28.232 [2024-11-15 11:38:53.575027] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:15:28.232 [2024-11-15 11:38:53.575034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.232 [2024-11-15 11:38:53.575039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.232 [2024-11-15 11:38:53.575043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.232 [2024-11-15 11:38:53.575048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.232 [2024-11-15 11:38:53.575326] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:28.232 [2024-11-15 11:38:53.575334] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:28.232 [2024-11-15 11:38:53.576330] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:28.232 [2024-11-15 11:38:53.576371] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:15:28.232 [2024-11-15 11:38:53.576377] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:15:28.232 [2024-11-15 11:38:53.577341] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:28.232 [2024-11-15 11:38:53.577349] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:15:28.232 [2024-11-15 11:38:53.577397] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:28.232 [2024-11-15 11:38:53.579570] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:28.232 
11:38:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
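The read run kicked off just above and the write run that follows differ only in `-w`. Condensed, with the common flags glossed (`-q` queue depth, `-o` I/O size in bytes, `-t` seconds, `-c` core mask, `-r` the target transport ID; `-s` and `-g` are memory/segment options passed through by the harness — check the tool's --help rather than trusting these glosses):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
    for wl in read write; do
        "$SPDK/build/bin/spdk_nvme_perf" -r "$TRID" \
            -s 256 -g -q 128 -o 4096 -w "$wl" -t 5 -c 0x2
    done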
00:15:28.493 [2024-11-15 11:38:53.746188] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:33.782 Initializing NVMe Controllers 00:15:33.782 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:33.782 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:33.782 Initialization complete. Launching workers. 00:15:33.782 ======================================================== 00:15:33.782 Latency(us) 00:15:33.782 Device Information : IOPS MiB/s Average min max 00:15:33.782 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39991.20 156.22 3200.93 855.32 7763.11 00:15:33.782 ======================================================== 00:15:33.782 Total : 39991.20 156.22 3200.93 855.32 7763.11 00:15:33.782 00:15:33.782 [2024-11-15 11:38:58.767344] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:33.782 11:38:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:33.782 [2024-11-15 11:38:58.937158] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:39.062 Initializing NVMe Controllers 00:15:39.062 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:39.062 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:39.062 Initialization complete. Launching workers. 
00:15:39.062 ======================================================== 00:15:39.062 Latency(us) 00:15:39.062 Device Information : IOPS MiB/s Average min max 00:15:39.062 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16044.04 62.67 7977.51 4989.68 9978.09 00:15:39.062 ======================================================== 00:15:39.062 Total : 16044.04 62.67 7977.51 4989.68 9978.09 00:15:39.062 00:15:39.062 [2024-11-15 11:39:03.969320] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:39.062 11:39:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:39.062 [2024-11-15 11:39:04.168156] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:44.346 [2024-11-15 11:39:09.237767] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:44.346 Initializing NVMe Controllers 00:15:44.346 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:44.346 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:44.346 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:44.346 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:44.346 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:44.346 Initialization complete. Launching workers. 00:15:44.346 Starting thread on core 2 00:15:44.346 Starting thread on core 3 00:15:44.346 Starting thread on core 1 00:15:44.347 11:39:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:44.347 [2024-11-15 11:39:09.482899] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:47.649 [2024-11-15 11:39:12.534974] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:47.649 Initializing NVMe Controllers 00:15:47.649 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:47.649 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:47.649 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:47.649 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:47.649 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:47.649 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:47.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:47.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:47.649 Initialization complete. Launching workers. 
00:15:47.649 Starting thread on core 1 with urgent priority queue
00:15:47.649 Starting thread on core 2 with urgent priority queue
00:15:47.649 Starting thread on core 3 with urgent priority queue
00:15:47.649 Starting thread on core 0 with urgent priority queue
00:15:47.649 SPDK bdev Controller (SPDK1 ) core 0: 8877.67 IO/s 11.26 secs/100000 ios
00:15:47.649 SPDK bdev Controller (SPDK1 ) core 1: 12100.33 IO/s 8.26 secs/100000 ios
00:15:47.649 SPDK bdev Controller (SPDK1 ) core 2: 8902.33 IO/s 11.23 secs/100000 ios
00:15:47.649 SPDK bdev Controller (SPDK1 ) core 3: 11264.33 IO/s 8.88 secs/100000 ios
00:15:47.649 ========================================================
00:15:47.649
00:15:47.649 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
00:15:47.649 [2024-11-15 11:39:12.778993] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:15:47.649 Initializing NVMe Controllers
00:15:47.649 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:15:47.649 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:15:47.649 Namespace ID: 1 size: 0GB
00:15:47.649 Initialization complete.
00:15:47.649 INFO: using host memory buffer for IO
00:15:47.649 Hello world!
00:15:47.649 [2024-11-15 11:39:12.813208] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:15:47.649 11:39:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
00:15:47.649 [2024-11-15 11:39:13.046963] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller
00:15:48.588 Initializing NVMe Controllers
00:15:48.588 Attaching to /var/run/vfio-user/domain/vfio-user1/1
00:15:48.588 Attached to /var/run/vfio-user/domain/vfio-user1/1
00:15:48.588 Initialization complete. Launching workers.
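The -c arguments threaded through these runs are hex core masks (bit N selects core N), and the thread placements logged above line up with them; the overhead tool's submit/complete latency summary and histograms follow below. A hedged illustration, not taken from the log itself:

    # -c 0x2 -> core 1 only   (perf and overhead runs)
    # -c 0xE -> cores 1,2,3   (reconnect: the three "Starting thread on core ..." lines)
    # -c 0xf -> cores 0,1,2,3 (arbitration: four urgent-priority threads)
    printf '0x%x\n' $(( (1 << 1) | (1 << 2) | (1 << 3) ))   # prints 0xe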
00:15:48.588 submit (in ns) avg, min, max = 6525.6, 2818.3, 3998285.8 00:15:48.588 complete (in ns) avg, min, max = 16645.4, 1640.0, 5992635.0 00:15:48.588 00:15:48.588 Submit histogram 00:15:48.588 ================ 00:15:48.588 Range in us Cumulative Count 00:15:48.588 2.813 - 2.827: 0.1582% ( 32) 00:15:48.588 2.827 - 2.840: 0.8555% ( 141) 00:15:48.588 2.840 - 2.853: 3.0610% ( 446) 00:15:48.588 2.853 - 2.867: 7.5462% ( 907) 00:15:48.588 2.867 - 2.880: 12.6001% ( 1022) 00:15:48.588 2.880 - 2.893: 18.6876% ( 1231) 00:15:48.588 2.893 - 2.907: 24.1618% ( 1107) 00:15:48.588 2.907 - 2.920: 29.0970% ( 998) 00:15:48.588 2.920 - 2.933: 34.7938% ( 1152) 00:15:48.588 2.933 - 2.947: 40.5351% ( 1161) 00:15:48.588 2.947 - 2.960: 46.6175% ( 1230) 00:15:48.588 2.960 - 2.973: 53.3083% ( 1353) 00:15:48.588 2.973 - 2.987: 61.7001% ( 1697) 00:15:48.588 2.987 - 3.000: 70.1711% ( 1713) 00:15:48.588 3.000 - 3.013: 78.1921% ( 1622) 00:15:48.588 3.013 - 3.027: 85.1943% ( 1416) 00:15:48.588 3.027 - 3.040: 90.5450% ( 1082) 00:15:48.588 3.040 - 3.053: 94.5406% ( 808) 00:15:48.588 3.053 - 3.067: 96.9192% ( 481) 00:15:48.588 3.067 - 3.080: 98.3137% ( 282) 00:15:48.588 3.080 - 3.093: 98.9022% ( 119) 00:15:48.588 3.093 - 3.107: 99.2582% ( 72) 00:15:48.588 3.107 - 3.120: 99.4659% ( 42) 00:15:48.588 3.120 - 3.133: 99.5747% ( 22) 00:15:48.588 3.133 - 3.147: 99.5994% ( 5) 00:15:48.588 3.147 - 3.160: 99.6143% ( 3) 00:15:48.588 3.213 - 3.227: 99.6192% ( 1) 00:15:48.588 3.373 - 3.387: 99.6242% ( 1) 00:15:48.588 3.573 - 3.600: 99.6291% ( 1) 00:15:48.588 3.813 - 3.840: 99.6390% ( 2) 00:15:48.588 4.133 - 4.160: 99.6440% ( 1) 00:15:48.588 4.373 - 4.400: 99.6489% ( 1) 00:15:48.588 4.507 - 4.533: 99.6588% ( 2) 00:15:48.588 4.640 - 4.667: 99.6687% ( 2) 00:15:48.588 4.667 - 4.693: 99.6736% ( 1) 00:15:48.588 4.693 - 4.720: 99.6835% ( 2) 00:15:48.588 4.720 - 4.747: 99.6885% ( 1) 00:15:48.588 4.747 - 4.773: 99.6934% ( 1) 00:15:48.588 4.773 - 4.800: 99.6983% ( 1) 00:15:48.588 4.800 - 4.827: 99.7132% ( 3) 00:15:48.588 4.827 - 4.853: 99.7181% ( 1) 00:15:48.588 4.853 - 4.880: 99.7280% ( 2) 00:15:48.588 4.907 - 4.933: 99.7429% ( 3) 00:15:48.588 4.960 - 4.987: 99.7527% ( 2) 00:15:48.588 4.987 - 5.013: 99.7676% ( 3) 00:15:48.588 5.013 - 5.040: 99.7725% ( 1) 00:15:48.588 5.040 - 5.067: 99.7824% ( 2) 00:15:48.588 5.067 - 5.093: 99.7973% ( 3) 00:15:48.588 5.093 - 5.120: 99.8022% ( 1) 00:15:48.588 5.253 - 5.280: 99.8071% ( 1) 00:15:48.588 5.387 - 5.413: 99.8170% ( 2) 00:15:48.588 5.520 - 5.547: 99.8220% ( 1) 00:15:48.588 5.760 - 5.787: 99.8269% ( 1) 00:15:48.588 5.813 - 5.840: 99.8319% ( 1) 00:15:48.588 5.867 - 5.893: 99.8368% ( 1) 00:15:48.588 6.053 - 6.080: 99.8418% ( 1) 00:15:48.588 6.320 - 6.347: 99.8467% ( 1) 00:15:48.588 6.453 - 6.480: 99.8516% ( 1) 00:15:48.588 6.480 - 6.507: 99.8615% ( 2) 00:15:48.588 6.560 - 6.587: 99.8665% ( 1) 00:15:48.588 6.587 - 6.613: 99.8714% ( 1) 00:15:48.588 6.613 - 6.640: 99.8764% ( 1) 00:15:48.588 6.667 - 6.693: 99.8813% ( 1) 00:15:48.588 6.827 - 6.880: 99.8912% ( 2) 00:15:48.588 7.093 - 7.147: 99.8962% ( 1) 00:15:48.588 7.200 - 7.253: 99.9011% ( 1) 00:15:48.588 9.440 - 9.493: 99.9060% ( 1) 00:15:48.588 84.480 - 84.907: 99.9110% ( 1) 00:15:48.588 3986.773 - 4014.080: 100.0000% ( 18) 00:15:48.588 00:15:48.588 Complete histogram 00:15:48.588 ================== 00:15:48.588 Range in us Cumulative Count 00:15:48.588 1.640 - 1.647: 0.5341% ( 108) 00:15:48.588 1.647 - 1.653: 0.6725% ( 28) 00:15:48.588 1.653 - 1.660: 0.7319% ( 12) 00:15:48.588 1.660 - 1.667: 0.8654% ( 27) 00:15:48.588 1.667 - 1.673: 0.9297% ( 13) 
00:15:48.588 1.673 - 1.680: 0.9495% ( 4) 00:15:48.588 1.680 - 1.687: 0.9742% ( 5) 00:15:48.588 1.687 - 1.693: 0.9940% ( 4) 00:15:48.588 [2024-11-15 11:39:14.070620] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:48.849 1.693 - 1.700: 34.4575% ( 6767) 00:15:48.849 1.700 - 1.707: 49.3769% ( 3017) 00:15:48.849 1.707 - 1.720: 68.0793% ( 3782) 00:15:48.849 1.720 - 1.733: 78.6173% ( 2131) 00:15:48.849 1.733 - 1.747: 81.7377% ( 631) 00:15:48.849 1.747 - 1.760: 84.1757% ( 493) 00:15:48.849 1.760 - 1.773: 90.4906% ( 1277) 00:15:48.849 1.773 - 1.787: 95.3565% ( 984) 00:15:48.849 1.787 - 1.800: 98.0862% ( 552) 00:15:48.849 1.800 - 1.813: 99.1000% ( 205) 00:15:48.849 1.813 - 1.827: 99.3571% ( 52) 00:15:48.849 1.827 - 1.840: 99.3918% ( 7) 00:15:48.849 1.840 - 1.853: 99.3967% ( 1) 00:15:48.849 1.867 - 1.880: 99.4066% ( 2) 00:15:48.849 1.973 - 1.987: 99.4115% ( 1) 00:15:48.849 3.333 - 3.347: 99.4165% ( 1) 00:15:48.849 3.520 - 3.547: 99.4214% ( 1) 00:15:48.849 3.600 - 3.627: 99.4313% ( 2) 00:15:48.849 3.680 - 3.707: 99.4363% ( 1) 00:15:48.849 3.707 - 3.733: 99.4461% ( 2) 00:15:48.849 3.787 - 3.813: 99.4511% ( 1) 00:15:48.849 3.813 - 3.840: 99.4659% ( 3) 00:15:48.849 3.840 - 3.867: 99.4709% ( 1) 00:15:48.849 4.027 - 4.053: 99.4758% ( 1) 00:15:48.849 4.107 - 4.133: 99.4808% ( 1) 00:15:48.849 4.187 - 4.213: 99.4857% ( 1) 00:15:48.849 4.267 - 4.293: 99.4907% ( 1) 00:15:48.849 4.347 - 4.373: 99.5005% ( 2) 00:15:48.849 4.427 - 4.453: 99.5055% ( 1) 00:15:48.849 4.587 - 4.613: 99.5104% ( 1) 00:15:48.849 4.640 - 4.667: 99.5203% ( 2) 00:15:48.849 4.853 - 4.880: 99.5253% ( 1) 00:15:48.849 4.907 - 4.933: 99.5401% ( 3) 00:15:48.849 5.013 - 5.040: 99.5450% ( 1) 00:15:48.849 5.253 - 5.280: 99.5500% ( 1) 00:15:48.849 5.333 - 5.360: 99.5549% ( 1) 00:15:48.849 5.360 - 5.387: 99.5599% ( 1) 00:15:48.849 5.413 - 5.440: 99.5648% ( 1) 00:15:48.849 5.440 - 5.467: 99.5747% ( 2) 00:15:48.849 5.493 - 5.520: 99.5846% ( 2) 00:15:48.849 5.600 - 5.627: 99.5896% ( 1) 00:15:48.849 5.680 - 5.707: 99.5994% ( 2) 00:15:48.849 6.160 - 6.187: 99.6044% ( 1) 00:15:48.849 6.187 - 6.213: 99.6093% ( 1) 00:15:48.849 7.200 - 7.253: 99.6143% ( 1) 00:15:48.849 9.653 - 9.707: 99.6192% ( 1) 00:15:48.849 33.493 - 33.707: 99.6242% ( 1) 00:15:48.849 132.267 - 133.120: 99.6291% ( 1) 00:15:48.849 3986.773 - 4014.080: 99.9951% ( 74) 00:15:48.849 5980.160 - 6007.467: 100.0000% ( 1) 00:15:48.849 00:15:48.849 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:48.849 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:48.849 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:48.849 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:48.849 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:48.849 [ 00:15:48.849 { 00:15:48.849 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:48.849 "subtype": "Discovery", 00:15:48.849 "listen_addresses": [], 00:15:48.849 "allow_any_host": true, 00:15:48.849 "hosts": [] 00:15:48.849 }, 00:15:48.849 { 00:15:48.849 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:48.849 "subtype": "NVMe",
00:15:48.849 "listen_addresses": [ 00:15:48.849 { 00:15:48.849 "trtype": "VFIOUSER", 00:15:48.849 "adrfam": "IPv4", 00:15:48.849 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:48.849 "trsvcid": "0" 00:15:48.849 } 00:15:48.849 ], 00:15:48.849 "allow_any_host": true, 00:15:48.849 "hosts": [], 00:15:48.849 "serial_number": "SPDK1", 00:15:48.849 "model_number": "SPDK bdev Controller", 00:15:48.849 "max_namespaces": 32, 00:15:48.849 "min_cntlid": 1, 00:15:48.849 "max_cntlid": 65519, 00:15:48.849 "namespaces": [ 00:15:48.849 { 00:15:48.849 "nsid": 1, 00:15:48.849 "bdev_name": "Malloc1", 00:15:48.849 "name": "Malloc1", 00:15:48.849 "nguid": "CA8460D1EA9F48D483F276C69BB9073F", 00:15:48.849 "uuid": "ca8460d1-ea9f-48d4-83f2-76c69bb9073f" 00:15:48.849 } 00:15:48.849 ] 00:15:48.849 }, 00:15:48.849 { 00:15:48.849 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:48.849 "subtype": "NVMe", 00:15:48.849 "listen_addresses": [ 00:15:48.849 { 00:15:48.849 "trtype": "VFIOUSER", 00:15:48.849 "adrfam": "IPv4", 00:15:48.849 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:48.849 "trsvcid": "0" 00:15:48.849 } 00:15:48.849 ], 00:15:48.849 "allow_any_host": true, 00:15:48.849 "hosts": [], 00:15:48.849 "serial_number": "SPDK2", 00:15:48.849 "model_number": "SPDK bdev Controller", 00:15:48.849 "max_namespaces": 32, 00:15:48.849 "min_cntlid": 1, 00:15:48.849 "max_cntlid": 65519, 00:15:48.849 "namespaces": [ 00:15:48.849 { 00:15:48.849 "nsid": 1, 00:15:48.849 "bdev_name": "Malloc2", 00:15:48.849 "name": "Malloc2", 00:15:48.849 "nguid": "01FB3D2D262A417FAD09CFB988B9C374", 00:15:48.849 "uuid": "01fb3d2d-262a-417f-ad09-cfb988b9c374" 00:15:48.849 } 00:15:48.849 ] 00:15:48.849 } 00:15:48.849 ] 00:15:48.849 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:48.849 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1028571 00:15:48.849 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:48.849 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:48.849 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:15:48.849 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:48.849 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:48.849 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:15:48.849 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:48.849 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:49.109 [2024-11-15 11:39:14.460962] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:49.109 Malloc3 00:15:49.109 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:49.369 [2024-11-15 11:39:14.647333] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:49.369 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:49.369 Asynchronous Event Request test 00:15:49.369 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:49.369 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:49.369 Registering asynchronous event callbacks... 00:15:49.369 Starting namespace attribute notice tests for all controllers... 00:15:49.369 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:49.369 aer_cb - Changed Namespace 00:15:49.369 Cleaning up... 00:15:49.369 [ 00:15:49.369 { 00:15:49.369 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:49.369 "subtype": "Discovery", 00:15:49.369 "listen_addresses": [], 00:15:49.369 "allow_any_host": true, 00:15:49.369 "hosts": [] 00:15:49.369 }, 00:15:49.369 { 00:15:49.369 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:49.369 "subtype": "NVMe", 00:15:49.369 "listen_addresses": [ 00:15:49.369 { 00:15:49.369 "trtype": "VFIOUSER", 00:15:49.369 "adrfam": "IPv4", 00:15:49.369 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:49.369 "trsvcid": "0" 00:15:49.369 } 00:15:49.369 ], 00:15:49.369 "allow_any_host": true, 00:15:49.369 "hosts": [], 00:15:49.369 "serial_number": "SPDK1", 00:15:49.369 "model_number": "SPDK bdev Controller", 00:15:49.369 "max_namespaces": 32, 00:15:49.369 "min_cntlid": 1, 00:15:49.369 "max_cntlid": 65519, 00:15:49.369 "namespaces": [ 00:15:49.369 { 00:15:49.369 "nsid": 1, 00:15:49.369 "bdev_name": "Malloc1", 00:15:49.369 "name": "Malloc1", 00:15:49.369 "nguid": "CA8460D1EA9F48D483F276C69BB9073F", 00:15:49.369 "uuid": "ca8460d1-ea9f-48d4-83f2-76c69bb9073f" 00:15:49.369 }, 00:15:49.369 { 00:15:49.369 "nsid": 2, 00:15:49.369 "bdev_name": "Malloc3", 00:15:49.369 "name": "Malloc3", 00:15:49.369 "nguid": "3EA763DC24DE412E894BE263DCA28DC7", 00:15:49.369 "uuid": "3ea763dc-24de-412e-894b-e263dca28dc7" 00:15:49.369 } 00:15:49.369 ] 00:15:49.369 }, 00:15:49.369 { 00:15:49.369 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:49.369 "subtype": "NVMe", 00:15:49.369 "listen_addresses": [ 00:15:49.369 { 00:15:49.369 "trtype": "VFIOUSER", 00:15:49.369 "adrfam": "IPv4", 00:15:49.369 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:49.369 "trsvcid": "0" 00:15:49.369 } 00:15:49.369 ], 00:15:49.369 "allow_any_host": true, 00:15:49.369 "hosts": [], 00:15:49.369 "serial_number": "SPDK2", 00:15:49.369 "model_number": "SPDK bdev 
Controller", 00:15:49.369 "max_namespaces": 32, 00:15:49.369 "min_cntlid": 1, 00:15:49.369 "max_cntlid": 65519, 00:15:49.369 "namespaces": [ 00:15:49.369 { 00:15:49.369 "nsid": 1, 00:15:49.369 "bdev_name": "Malloc2", 00:15:49.369 "name": "Malloc2", 00:15:49.369 "nguid": "01FB3D2D262A417FAD09CFB988B9C374", 00:15:49.369 "uuid": "01fb3d2d-262a-417f-ad09-cfb988b9c374" 00:15:49.369 } 00:15:49.369 ] 00:15:49.369 } 00:15:49.369 ] 00:15:49.369 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1028571 00:15:49.369 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:49.369 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:49.369 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:49.369 11:39:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:49.631 [2024-11-15 11:39:14.887092] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:15:49.631 [2024-11-15 11:39:14.887137] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1028737 ] 00:15:49.631 [2024-11-15 11:39:14.925765] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:49.631 [2024-11-15 11:39:14.934728] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:49.631 [2024-11-15 11:39:14.934747] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7effe4ca0000 00:15:49.631 [2024-11-15 11:39:14.935726] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:49.631 [2024-11-15 11:39:14.936733] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:49.631 [2024-11-15 11:39:14.937738] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:49.631 [2024-11-15 11:39:14.938747] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:49.631 [2024-11-15 11:39:14.939751] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:49.631 [2024-11-15 11:39:14.940759] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:49.631 [2024-11-15 11:39:14.941770] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:49.631 [2024-11-15 11:39:14.942775] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:15:49.631 [2024-11-15 11:39:14.943780] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:49.631 [2024-11-15 11:39:14.943787] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7effe4c95000 00:15:49.631 [2024-11-15 11:39:14.944698] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:49.631 [2024-11-15 11:39:14.954072] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:49.631 [2024-11-15 11:39:14.954093] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:15:49.631 [2024-11-15 11:39:14.959170] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:49.631 [2024-11-15 11:39:14.959201] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:49.631 [2024-11-15 11:39:14.959259] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:15:49.631 [2024-11-15 11:39:14.959268] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:15:49.631 [2024-11-15 11:39:14.959272] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:15:49.631 [2024-11-15 11:39:14.960172] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:49.631 [2024-11-15 11:39:14.960179] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:15:49.631 [2024-11-15 11:39:14.960184] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:15:49.631 [2024-11-15 11:39:14.961172] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:49.631 [2024-11-15 11:39:14.961179] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:15:49.631 [2024-11-15 11:39:14.961184] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:49.631 [2024-11-15 11:39:14.962178] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:49.631 [2024-11-15 11:39:14.962185] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:49.631 [2024-11-15 11:39:14.963189] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:49.631 [2024-11-15 11:39:14.963195] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
00:15:49.632 [2024-11-15 11:39:14.963199] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:49.632 [2024-11-15 11:39:14.963204] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:49.632 [2024-11-15 11:39:14.963309] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:15:49.632 [2024-11-15 11:39:14.963314] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:49.632 [2024-11-15 11:39:14.963318] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:49.632 [2024-11-15 11:39:14.964199] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:49.632 [2024-11-15 11:39:14.965203] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:49.632 [2024-11-15 11:39:14.966207] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:49.632 [2024-11-15 11:39:14.967214] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:49.632 [2024-11-15 11:39:14.967243] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:49.632 [2024-11-15 11:39:14.968225] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:49.632 [2024-11-15 11:39:14.968231] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:49.632 [2024-11-15 11:39:14.968235] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:49.632 [2024-11-15 11:39:14.968249] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:15:49.632 [2024-11-15 11:39:14.968255] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:49.632 [2024-11-15 11:39:14.968263] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:49.632 [2024-11-15 11:39:14.968267] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:49.632 [2024-11-15 11:39:14.968269] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:49.632 [2024-11-15 11:39:14.968277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:49.632 [2024-11-15 11:39:14.975571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:49.632 
[2024-11-15 11:39:14.975580] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:15:49.632 [2024-11-15 11:39:14.975583] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:15:49.632 [2024-11-15 11:39:14.975586] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:15:49.632 [2024-11-15 11:39:14.975589] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:49.632 [2024-11-15 11:39:14.975594] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:15:49.632 [2024-11-15 11:39:14.975598] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:15:49.632 [2024-11-15 11:39:14.975601] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:15:49.632 [2024-11-15 11:39:14.975607] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:49.632 [2024-11-15 11:39:14.975614] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:49.632 [2024-11-15 11:39:14.983568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:49.632 [2024-11-15 11:39:14.983577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:49.632 [2024-11-15 11:39:14.983583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:49.632 [2024-11-15 11:39:14.983589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:49.632 [2024-11-15 11:39:14.983595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:49.632 [2024-11-15 11:39:14.983599] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:49.632 [2024-11-15 11:39:14.983604] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:49.632 [2024-11-15 11:39:14.983610] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:49.632 [2024-11-15 11:39:14.991565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:49.632 [2024-11-15 11:39:14.991572] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:15:49.632 [2024-11-15 11:39:14.991576] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:15:49.632 [2024-11-15 11:39:14.991581] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:15:49.632 [2024-11-15 11:39:14.991585] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:49.632 [2024-11-15 11:39:14.991592] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:49.632 [2024-11-15 11:39:14.999565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:49.632 [2024-11-15 11:39:14.999611] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:15:49.632 [2024-11-15 11:39:14.999617] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:49.632 [2024-11-15 11:39:14.999622] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:49.632 [2024-11-15 11:39:14.999625] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:49.632 [2024-11-15 11:39:14.999628] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:49.632 [2024-11-15 11:39:14.999632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:49.632 [2024-11-15 11:39:15.007565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:49.632 [2024-11-15 11:39:15.007576] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:15:49.632 [2024-11-15 11:39:15.007582] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:15:49.632 [2024-11-15 11:39:15.007588] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:49.632 [2024-11-15 11:39:15.007596] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:49.632 [2024-11-15 11:39:15.007599] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:49.632 [2024-11-15 11:39:15.007601] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:49.632 [2024-11-15 11:39:15.007606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:49.632 [2024-11-15 11:39:15.015566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:49.632 [2024-11-15 11:39:15.015576] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:49.632 [2024-11-15 11:39:15.015582] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:15:49.632 [2024-11-15 11:39:15.015587] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:49.632 [2024-11-15 11:39:15.015590] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:49.632 [2024-11-15 11:39:15.015593] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:49.632 [2024-11-15 11:39:15.015597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:49.632 [2024-11-15 11:39:15.023566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:49.632 [2024-11-15 11:39:15.023573] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:49.632 [2024-11-15 11:39:15.023577] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:49.632 [2024-11-15 11:39:15.023583] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:15:49.632 [2024-11-15 11:39:15.023587] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:49.632 [2024-11-15 11:39:15.023590] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:49.632 [2024-11-15 11:39:15.023594] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:15:49.632 [2024-11-15 11:39:15.023597] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:49.632 [2024-11-15 11:39:15.023601] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:15:49.632 [2024-11-15 11:39:15.023605] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:15:49.632 [2024-11-15 11:39:15.023617] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:49.632 [2024-11-15 11:39:15.031566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:49.632 [2024-11-15 11:39:15.031576] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:49.633 [2024-11-15 11:39:15.039565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:49.633 [2024-11-15 11:39:15.039576] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:49.633 [2024-11-15 11:39:15.047565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
00:15:49.633 [2024-11-15 11:39:15.047574] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:49.633 [2024-11-15 11:39:15.055566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:49.633 [2024-11-15 11:39:15.055578] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:49.633 [2024-11-15 11:39:15.055581] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:49.633 [2024-11-15 11:39:15.055583] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:49.633 [2024-11-15 11:39:15.055586] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:49.633 [2024-11-15 11:39:15.055588] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:49.633 [2024-11-15 11:39:15.055593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:49.633 [2024-11-15 11:39:15.055599] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:49.633 [2024-11-15 11:39:15.055602] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:49.633 [2024-11-15 11:39:15.055604] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:49.633 [2024-11-15 11:39:15.055608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:49.633 [2024-11-15 11:39:15.055613] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:49.633 [2024-11-15 11:39:15.055616] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:49.633 [2024-11-15 11:39:15.055619] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:49.633 [2024-11-15 11:39:15.055623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:49.633 [2024-11-15 11:39:15.055628] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:49.633 [2024-11-15 11:39:15.055631] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:49.633 [2024-11-15 11:39:15.055634] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:49.633 [2024-11-15 11:39:15.055638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:49.633 [2024-11-15 11:39:15.063567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:49.633 [2024-11-15 11:39:15.063577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:49.633 [2024-11-15 11:39:15.063585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:49.633 
[2024-11-15 11:39:15.063590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:49.633 ===================================================== 00:15:49.633 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:49.633 ===================================================== 00:15:49.633 Controller Capabilities/Features 00:15:49.633 ================================ 00:15:49.633 Vendor ID: 4e58 00:15:49.633 Subsystem Vendor ID: 4e58 00:15:49.633 Serial Number: SPDK2 00:15:49.633 Model Number: SPDK bdev Controller 00:15:49.633 Firmware Version: 25.01 00:15:49.633 Recommended Arb Burst: 6 00:15:49.633 IEEE OUI Identifier: 8d 6b 50 00:15:49.633 Multi-path I/O 00:15:49.633 May have multiple subsystem ports: Yes 00:15:49.633 May have multiple controllers: Yes 00:15:49.633 Associated with SR-IOV VF: No 00:15:49.633 Max Data Transfer Size: 131072 00:15:49.633 Max Number of Namespaces: 32 00:15:49.633 Max Number of I/O Queues: 127 00:15:49.633 NVMe Specification Version (VS): 1.3 00:15:49.633 NVMe Specification Version (Identify): 1.3 00:15:49.633 Maximum Queue Entries: 256 00:15:49.633 Contiguous Queues Required: Yes 00:15:49.633 Arbitration Mechanisms Supported 00:15:49.633 Weighted Round Robin: Not Supported 00:15:49.633 Vendor Specific: Not Supported 00:15:49.633 Reset Timeout: 15000 ms 00:15:49.633 Doorbell Stride: 4 bytes 00:15:49.633 NVM Subsystem Reset: Not Supported 00:15:49.633 Command Sets Supported 00:15:49.633 NVM Command Set: Supported 00:15:49.633 Boot Partition: Not Supported 00:15:49.633 Memory Page Size Minimum: 4096 bytes 00:15:49.633 Memory Page Size Maximum: 4096 bytes 00:15:49.633 Persistent Memory Region: Not Supported 00:15:49.633 Optional Asynchronous Events Supported 00:15:49.633 Namespace Attribute Notices: Supported 00:15:49.633 Firmware Activation Notices: Not Supported 00:15:49.633 ANA Change Notices: Not Supported 00:15:49.633 PLE Aggregate Log Change Notices: Not Supported 00:15:49.633 LBA Status Info Alert Notices: Not Supported 00:15:49.633 EGE Aggregate Log Change Notices: Not Supported 00:15:49.633 Normal NVM Subsystem Shutdown event: Not Supported 00:15:49.633 Zone Descriptor Change Notices: Not Supported 00:15:49.633 Discovery Log Change Notices: Not Supported 00:15:49.633 Controller Attributes 00:15:49.633 128-bit Host Identifier: Supported 00:15:49.633 Non-Operational Permissive Mode: Not Supported 00:15:49.633 NVM Sets: Not Supported 00:15:49.633 Read Recovery Levels: Not Supported 00:15:49.633 Endurance Groups: Not Supported 00:15:49.633 Predictable Latency Mode: Not Supported 00:15:49.633 Traffic Based Keep ALive: Not Supported 00:15:49.633 Namespace Granularity: Not Supported 00:15:49.633 SQ Associations: Not Supported 00:15:49.633 UUID List: Not Supported 00:15:49.633 Multi-Domain Subsystem: Not Supported 00:15:49.633 Fixed Capacity Management: Not Supported 00:15:49.633 Variable Capacity Management: Not Supported 00:15:49.633 Delete Endurance Group: Not Supported 00:15:49.633 Delete NVM Set: Not Supported 00:15:49.633 Extended LBA Formats Supported: Not Supported 00:15:49.633 Flexible Data Placement Supported: Not Supported 00:15:49.633 00:15:49.633 Controller Memory Buffer Support 00:15:49.633 ================================ 00:15:49.633 Supported: No 00:15:49.633 00:15:49.633 Persistent Memory Region Support 00:15:49.633 ================================ 00:15:49.633 Supported: No 00:15:49.633 00:15:49.633 Admin Command Set Attributes 
00:15:49.633 ============================ 00:15:49.633 Security Send/Receive: Not Supported 00:15:49.633 Format NVM: Not Supported 00:15:49.633 Firmware Activate/Download: Not Supported 00:15:49.633 Namespace Management: Not Supported 00:15:49.633 Device Self-Test: Not Supported 00:15:49.633 Directives: Not Supported 00:15:49.633 NVMe-MI: Not Supported 00:15:49.633 Virtualization Management: Not Supported 00:15:49.633 Doorbell Buffer Config: Not Supported 00:15:49.633 Get LBA Status Capability: Not Supported 00:15:49.633 Command & Feature Lockdown Capability: Not Supported 00:15:49.633 Abort Command Limit: 4 00:15:49.633 Async Event Request Limit: 4 00:15:49.633 Number of Firmware Slots: N/A 00:15:49.633 Firmware Slot 1 Read-Only: N/A 00:15:49.633 Firmware Activation Without Reset: N/A 00:15:49.633 Multiple Update Detection Support: N/A 00:15:49.633 Firmware Update Granularity: No Information Provided 00:15:49.633 Per-Namespace SMART Log: No 00:15:49.633 Asymmetric Namespace Access Log Page: Not Supported 00:15:49.633 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:49.633 Command Effects Log Page: Supported 00:15:49.633 Get Log Page Extended Data: Supported 00:15:49.633 Telemetry Log Pages: Not Supported 00:15:49.633 Persistent Event Log Pages: Not Supported 00:15:49.633 Supported Log Pages Log Page: May Support 00:15:49.633 Commands Supported & Effects Log Page: Not Supported 00:15:49.633 Feature Identifiers & Effects Log Page:May Support 00:15:49.633 NVMe-MI Commands & Effects Log Page: May Support 00:15:49.633 Data Area 4 for Telemetry Log: Not Supported 00:15:49.633 Error Log Page Entries Supported: 128 00:15:49.633 Keep Alive: Supported 00:15:49.633 Keep Alive Granularity: 10000 ms 00:15:49.633 00:15:49.633 NVM Command Set Attributes 00:15:49.633 ========================== 00:15:49.633 Submission Queue Entry Size 00:15:49.633 Max: 64 00:15:49.633 Min: 64 00:15:49.633 Completion Queue Entry Size 00:15:49.633 Max: 16 00:15:49.633 Min: 16 00:15:49.633 Number of Namespaces: 32 00:15:49.633 Compare Command: Supported 00:15:49.633 Write Uncorrectable Command: Not Supported 00:15:49.633 Dataset Management Command: Supported 00:15:49.633 Write Zeroes Command: Supported 00:15:49.633 Set Features Save Field: Not Supported 00:15:49.633 Reservations: Not Supported 00:15:49.633 Timestamp: Not Supported 00:15:49.633 Copy: Supported 00:15:49.633 Volatile Write Cache: Present 00:15:49.633 Atomic Write Unit (Normal): 1 00:15:49.633 Atomic Write Unit (PFail): 1 00:15:49.633 Atomic Compare & Write Unit: 1 00:15:49.633 Fused Compare & Write: Supported 00:15:49.633 Scatter-Gather List 00:15:49.633 SGL Command Set: Supported (Dword aligned) 00:15:49.633 SGL Keyed: Not Supported 00:15:49.634 SGL Bit Bucket Descriptor: Not Supported 00:15:49.634 SGL Metadata Pointer: Not Supported 00:15:49.634 Oversized SGL: Not Supported 00:15:49.634 SGL Metadata Address: Not Supported 00:15:49.634 SGL Offset: Not Supported 00:15:49.634 Transport SGL Data Block: Not Supported 00:15:49.634 Replay Protected Memory Block: Not Supported 00:15:49.634 00:15:49.634 Firmware Slot Information 00:15:49.634 ========================= 00:15:49.634 Active slot: 1 00:15:49.634 Slot 1 Firmware Revision: 25.01 00:15:49.634 00:15:49.634 00:15:49.634 Commands Supported and Effects 00:15:49.634 ============================== 00:15:49.634 Admin Commands 00:15:49.634 -------------- 00:15:49.634 Get Log Page (02h): Supported 00:15:49.634 Identify (06h): Supported 00:15:49.634 Abort (08h): Supported 00:15:49.634 Set Features (09h): Supported 
00:15:49.634 Get Features (0Ah): Supported 00:15:49.634 Asynchronous Event Request (0Ch): Supported 00:15:49.634 Keep Alive (18h): Supported 00:15:49.634 I/O Commands 00:15:49.634 ------------ 00:15:49.634 Flush (00h): Supported LBA-Change 00:15:49.634 Write (01h): Supported LBA-Change 00:15:49.634 Read (02h): Supported 00:15:49.634 Compare (05h): Supported 00:15:49.634 Write Zeroes (08h): Supported LBA-Change 00:15:49.634 Dataset Management (09h): Supported LBA-Change 00:15:49.634 Copy (19h): Supported LBA-Change 00:15:49.634 00:15:49.634 Error Log 00:15:49.634 ========= 00:15:49.634 00:15:49.634 Arbitration 00:15:49.634 =========== 00:15:49.634 Arbitration Burst: 1 00:15:49.634 00:15:49.634 Power Management 00:15:49.634 ================ 00:15:49.634 Number of Power States: 1 00:15:49.634 Current Power State: Power State #0 00:15:49.634 Power State #0: 00:15:49.634 Max Power: 0.00 W 00:15:49.634 Non-Operational State: Operational 00:15:49.634 Entry Latency: Not Reported 00:15:49.634 Exit Latency: Not Reported 00:15:49.634 Relative Read Throughput: 0 00:15:49.634 Relative Read Latency: 0 00:15:49.634 Relative Write Throughput: 0 00:15:49.634 Relative Write Latency: 0 00:15:49.634 Idle Power: Not Reported 00:15:49.634 Active Power: Not Reported 00:15:49.634 Non-Operational Permissive Mode: Not Supported 00:15:49.634 00:15:49.634 Health Information 00:15:49.634 ================== 00:15:49.634 Critical Warnings: 00:15:49.634 Available Spare Space: OK 00:15:49.634 Temperature: OK 00:15:49.634 Device Reliability: OK 00:15:49.634 Read Only: No 00:15:49.634 Volatile Memory Backup: OK 00:15:49.634 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:49.634 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:49.634 Available Spare: 0%
00:15:49.634 Available Spare Threshold: 0%
[2024-11-15 11:39:15.063664] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0
00:15:49.634 [2024-11-15 11:39:15.071565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0
00:15:49.634 [2024-11-15 11:39:15.071594] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD
00:15:49.634 [2024-11-15 11:39:15.071602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:49.634 [2024-11-15 11:39:15.071607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:49.634 [2024-11-15 11:39:15.071611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:49.634 [2024-11-15 11:39:15.071616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:49.634 [2024-11-15 11:39:15.071654] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001
00:15:49.634 [2024-11-15 11:39:15.071661] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001
00:15:49.634 [2024-11-15 11:39:15.072658] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:15:49.634 [2024-11-15 11:39:15.072694] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us
00:15:49.634 [2024-11-15 11:39:15.072699] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms
00:15:49.634 [2024-11-15 11:39:15.073669] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9
00:15:49.634 [2024-11-15 11:39:15.073678] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds
00:15:49.634 [2024-11-15 11:39:15.073718] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl
00:15:49.634 [2024-11-15 11:39:15.074687] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:15:49.634 Life Percentage Used: 0% 00:15:49.634 Data Units Read: 0 00:15:49.634 Data Units Written: 0 00:15:49.634 Host Read Commands: 0 00:15:49.634 Host Write Commands: 0 00:15:49.634 Controller Busy Time: 0 minutes 00:15:49.634 Power Cycles: 0 00:15:49.634 Power On Hours: 0 hours 00:15:49.634 Unsafe Shutdowns: 0 00:15:49.634 Unrecoverable Media Errors: 0 00:15:49.634 Lifetime Error Log Entries: 0 00:15:49.634 Warning Temperature Time: 0 minutes 00:15:49.634 Critical Temperature Time: 0 minutes 00:15:49.634 00:15:49.634 Number of Queues 00:15:49.634 ================ 00:15:49.634 Number of I/O Submission Queues: 127 00:15:49.634 Number of I/O Completion Queues: 127 00:15:49.634 00:15:49.634 Active Namespaces 00:15:49.634 ================= 00:15:49.634 Namespace ID:1 00:15:49.634 Error Recovery Timeout: Unlimited 00:15:49.634 Command Set Identifier: NVM (00h) 00:15:49.634 Deallocate: Supported 00:15:49.634 Deallocated/Unwritten Error: Not Supported 00:15:49.634 Deallocated Read Value: Unknown 00:15:49.634 Deallocate in Write Zeroes: Not Supported 00:15:49.634 Deallocated Guard Field: 0xFFFF 00:15:49.634 Flush: Supported 00:15:49.634 Reservation: Supported 00:15:49.634 Namespace Sharing Capabilities: Multiple Controllers 00:15:49.634 Size (in LBAs): 131072 (0GiB) 00:15:49.634 Capacity (in LBAs): 131072 (0GiB) 00:15:49.634 Utilization (in LBAs): 131072 (0GiB) 00:15:49.634 NGUID: 01FB3D2D262A417FAD09CFB988B9C374 00:15:49.634 UUID: 01fb3d2d-262a-417f-ad09-cfb988b9c374 00:15:49.634 Thin Provisioning: Not Supported 00:15:49.634 Per-NS Atomic Units: Yes 00:15:49.634 Atomic Boundary Size (Normal): 0 00:15:49.634 Atomic Boundary Size (PFail): 0 00:15:49.634 Atomic Boundary Offset: 0 00:15:49.634 Maximum Single Source Range Length: 65535 00:15:49.634 Maximum Copy Length: 65535 00:15:49.634 Maximum Source Range Count: 1 00:15:49.634 NGUID/EUI64 Never Reused: No 00:15:49.634 Namespace Write Protected: No 00:15:49.634 Number of LBA Formats: 1 00:15:49.634 Current LBA Format: LBA Format #00 00:15:49.634 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:49.634 00:15:49.634 11:39:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:49.894 [2024-11-15 11:39:15.268882] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:55.182 Initializing NVMe Controllers 00:15:55.182
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:15:55.182 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1
00:15:55.182 Initialization complete. Launching workers.
00:15:55.182 ========================================================
00:15:55.183 Latency(us)
00:15:55.183 Device Information : IOPS MiB/s Average min max
00:15:55.183 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39986.53 156.20 3200.94 853.32 8490.87
00:15:55.183 ========================================================
00:15:55.183 Total : 39986.53 156.20 3200.94 853.32 8490.87
00:15:55.183
00:15:55.183 [2024-11-15 11:39:20.371767] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:15:55.183 11:39:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
00:15:55.183 [2024-11-15 11:39:20.563349] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:16:00.471 Initializing NVMe Controllers
00:16:00.471 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:16:00.471 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1
00:16:00.471 Initialization complete. Launching workers.
00:16:00.471 ========================================================
00:16:00.471 Latency(us)
00:16:00.471 Device Information : IOPS MiB/s Average min max
00:16:00.471 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40052.68 156.46 3196.03 852.48 6886.05
00:16:00.471 ========================================================
00:16:00.471 Total : 40052.68 156.46 3196.03 852.48 6886.05
00:16:00.471
00:16:00.471 [2024-11-15 11:39:25.584050] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:16:00.471 11:39:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
00:16:00.471 [2024-11-15 11:39:25.783244] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:16:05.756 [2024-11-15 11:39:30.918647] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:16:05.756 Initializing NVMe Controllers
00:16:05.757 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:16:05.757 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:16:05.757 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1
00:16:05.757 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2
00:16:05.757 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3
00:16:05.757 Initialization complete. Launching workers.
00:16:05.757 Starting thread on core 2
00:16:05.757 Starting thread on core 3
00:16:05.757 Starting thread on core 1
00:16:05.757 11:39:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g
00:16:05.757 [2024-11-15 11:39:31.165955] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:16:09.056 [2024-11-15 11:39:34.222461] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:16:09.056 Initializing NVMe Controllers
00:16:09.056 Attaching to /var/run/vfio-user/domain/vfio-user2/2
00:16:09.056 Attached to /var/run/vfio-user/domain/vfio-user2/2
00:16:09.056 Associating SPDK bdev Controller (SPDK2 ) with lcore 0
00:16:09.056 Associating SPDK bdev Controller (SPDK2 ) with lcore 1
00:16:09.056 Associating SPDK bdev Controller (SPDK2 ) with lcore 2
00:16:09.056 Associating SPDK bdev Controller (SPDK2 ) with lcore 3
00:16:09.056 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration:
00:16:09.056 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1
00:16:09.056 Initialization complete. Launching workers.
00:16:09.056 Starting thread on core 1 with urgent priority queue
00:16:09.056 Starting thread on core 2 with urgent priority queue
00:16:09.056 Starting thread on core 3 with urgent priority queue
00:16:09.056 Starting thread on core 0 with urgent priority queue
00:16:09.056 SPDK bdev Controller (SPDK2 ) core 0: 16180.00 IO/s 6.18 secs/100000 ios
00:16:09.056 SPDK bdev Controller (SPDK2 ) core 1: 11557.67 IO/s 8.65 secs/100000 ios
00:16:09.056 SPDK bdev Controller (SPDK2 ) core 2: 7971.67 IO/s 12.54 secs/100000 ios
00:16:09.056 SPDK bdev Controller (SPDK2 ) core 3: 14757.67 IO/s 6.78 secs/100000 ios
00:16:09.056 ========================================================
00:16:09.056
00:16:09.056 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
00:16:09.056 [2024-11-15 11:39:34.462965] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:16:09.056 Initializing NVMe Controllers
00:16:09.056 Attaching to /var/run/vfio-user/domain/vfio-user2/2
00:16:09.056 Attached to /var/run/vfio-user/domain/vfio-user2/2
00:16:09.056 Namespace ID: 1 size: 0GB
00:16:09.056 Initialization complete.
00:16:09.056 INFO: using host memory buffer for IO
00:16:09.056 Hello world!
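(Note: the arbitration and hello_world runs above, like the earlier spdk_nvme_perf runs, all address the target through an SPDK transport ID string rather than a PCI address. A minimal sketch of that invocation pattern, using the paths and NQN from this run; a different socket directory or subsystem NQN would be substituted in the same positions:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'

Here trtype selects the vfio-user transport, traddr points at the controller's socket directory, and subnqn names the subsystem created earlier in this log.)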
00:16:09.056 [2024-11-15 11:39:34.473037] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:09.056 11:39:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:09.317 [2024-11-15 11:39:34.707429] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:10.704 Initializing NVMe Controllers 00:16:10.704 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:10.704 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:10.704 Initialization complete. Launching workers. 00:16:10.704 submit (in ns) avg, min, max = 5340.9, 2817.5, 3998868.3 00:16:10.704 complete (in ns) avg, min, max = 16767.4, 1630.0, 4995078.3 00:16:10.704 00:16:10.704 Submit histogram 00:16:10.704 ================ 00:16:10.704 Range in us Cumulative Count 00:16:10.704 2.813 - 2.827: 0.2026% ( 41) 00:16:10.704 2.827 - 2.840: 1.0770% ( 177) 00:16:10.704 2.840 - 2.853: 3.6559% ( 522) 00:16:10.704 2.853 - 2.867: 8.0233% ( 884) 00:16:10.704 2.867 - 2.880: 12.8996% ( 987) 00:16:10.704 2.880 - 2.893: 18.1809% ( 1069) 00:16:10.704 2.893 - 2.907: 24.0453% ( 1187) 00:16:10.704 2.907 - 2.920: 28.8029% ( 963) 00:16:10.704 2.920 - 2.933: 34.0596% ( 1064) 00:16:10.704 2.933 - 2.947: 39.9140% ( 1185) 00:16:10.704 2.947 - 2.960: 45.5066% ( 1132) 00:16:10.704 2.960 - 2.973: 51.8304% ( 1280) 00:16:10.704 2.973 - 2.987: 59.9279% ( 1639) 00:16:10.704 2.987 - 3.000: 68.5934% ( 1754) 00:16:10.704 3.000 - 3.013: 77.3331% ( 1769) 00:16:10.704 3.013 - 3.027: 84.2399% ( 1398) 00:16:10.704 3.027 - 3.040: 90.0499% ( 1176) 00:16:10.704 3.040 - 3.053: 94.1801% ( 836) 00:16:10.704 3.053 - 3.067: 96.7936% ( 529) 00:16:10.704 3.067 - 3.080: 98.2807% ( 301) 00:16:10.704 3.080 - 3.093: 98.9971% ( 145) 00:16:10.704 3.093 - 3.107: 99.2836% ( 58) 00:16:10.704 3.107 - 3.120: 99.4071% ( 25) 00:16:10.704 3.120 - 3.133: 99.4862% ( 16) 00:16:10.704 3.133 - 3.147: 99.5158% ( 6) 00:16:10.704 3.147 - 3.160: 99.5307% ( 3) 00:16:10.704 3.160 - 3.173: 99.5356% ( 1) 00:16:10.704 3.347 - 3.360: 99.5405% ( 1) 00:16:10.704 3.360 - 3.373: 99.5455% ( 1) 00:16:10.704 3.413 - 3.440: 99.5554% ( 2) 00:16:10.704 3.493 - 3.520: 99.5603% ( 1) 00:16:10.704 3.520 - 3.547: 99.5652% ( 1) 00:16:10.704 3.547 - 3.573: 99.5702% ( 1) 00:16:10.704 3.760 - 3.787: 99.5751% ( 1) 00:16:10.704 3.787 - 3.813: 99.5801% ( 1) 00:16:10.704 4.027 - 4.053: 99.5850% ( 1) 00:16:10.704 4.080 - 4.107: 99.5899% ( 1) 00:16:10.704 4.240 - 4.267: 99.5949% ( 1) 00:16:10.704 4.293 - 4.320: 99.5998% ( 1) 00:16:10.704 4.347 - 4.373: 99.6048% ( 1) 00:16:10.704 4.453 - 4.480: 99.6097% ( 1) 00:16:10.704 4.853 - 4.880: 99.6146% ( 1) 00:16:10.704 4.880 - 4.907: 99.6245% ( 2) 00:16:10.704 4.907 - 4.933: 99.6393% ( 3) 00:16:10.704 4.933 - 4.960: 99.6492% ( 2) 00:16:10.704 4.987 - 5.013: 99.6542% ( 1) 00:16:10.704 5.040 - 5.067: 99.6640% ( 2) 00:16:10.704 5.093 - 5.120: 99.6690% ( 1) 00:16:10.704 5.120 - 5.147: 99.6739% ( 1) 00:16:10.704 5.200 - 5.227: 99.6789% ( 1) 00:16:10.704 5.547 - 5.573: 99.6838% ( 1) 00:16:10.704 5.573 - 5.600: 99.6937% ( 2) 00:16:10.704 5.600 - 5.627: 99.6986% ( 1) 00:16:10.704 5.707 - 5.733: 99.7036% ( 1) 00:16:10.704 5.733 - 5.760: 99.7135% ( 2) 00:16:10.704 5.760 - 5.787: 99.7184% ( 1) 00:16:10.704 5.787 - 5.813: 99.7233% ( 1) 00:16:10.704 5.840 - 5.867: 
99.7283% ( 1) 00:16:10.704 5.867 - 5.893: 99.7332% ( 1) 00:16:10.704 5.893 - 5.920: 99.7431% ( 2) 00:16:10.704 6.000 - 6.027: 99.7530% ( 2) 00:16:10.704 6.027 - 6.053: 99.7579% ( 1) 00:16:10.704 6.053 - 6.080: 99.7629% ( 1) 00:16:10.704 6.133 - 6.160: 99.7777% ( 3) 00:16:10.704 6.160 - 6.187: 99.7826% ( 1) 00:16:10.704 6.187 - 6.213: 99.7876% ( 1) 00:16:10.704 6.213 - 6.240: 99.7925% ( 1) 00:16:10.704 6.240 - 6.267: 99.7974% ( 1) 00:16:10.704 6.267 - 6.293: 99.8024% ( 1) 00:16:10.704 6.320 - 6.347: 99.8073% ( 1) 00:16:10.704 6.373 - 6.400: 99.8172% ( 2) 00:16:10.704 6.507 - 6.533: 99.8221% ( 1) 00:16:10.704 6.560 - 6.587: 99.8271% ( 1) 00:16:10.704 6.587 - 6.613: 99.8320% ( 1) 00:16:10.704 6.667 - 6.693: 99.8419% ( 2) 00:16:10.704 6.693 - 6.720: 99.8567% ( 3) 00:16:10.704 6.747 - 6.773: 99.8666% ( 2) 00:16:10.704 6.773 - 6.800: 99.8715% ( 1) 00:16:10.704 6.800 - 6.827: 99.8765% ( 1) 00:16:10.704 [2024-11-15 11:39:35.801105] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:10.704 6.827 - 6.880: 99.8864% ( 2) 00:16:10.704 6.933 - 6.987: 99.8913% ( 1) 00:16:10.704 7.040 - 7.093: 99.8963% ( 1) 00:16:10.704 7.093 - 7.147: 99.9012% ( 1) 00:16:10.704 7.147 - 7.200: 99.9111% ( 2) 00:16:10.704 7.520 - 7.573: 99.9160% ( 1) 00:16:10.704 8.053 - 8.107: 99.9210% ( 1) 00:16:10.704 8.107 - 8.160: 99.9259% ( 1) 00:16:10.704 10.827 - 10.880: 99.9308% ( 1) 00:16:10.704 11.787 - 11.840: 99.9358% ( 1) 00:16:10.704 11.947 - 12.000: 99.9407% ( 1) 00:16:10.704 3986.773 - 4014.080: 100.0000% ( 12) 00:16:10.704 00:16:10.704 Complete histogram 00:16:10.704 ================== 00:16:10.704 Range in us Cumulative Count 00:16:10.704 1.627 - 1.633: 0.0049% ( 1) 00:16:10.704 1.640 - 1.647: 0.4249% ( 85) 00:16:10.704 1.647 - 1.653: 0.7510% ( 66) 00:16:10.704 1.653 - 1.660: 0.8498% ( 20) 00:16:10.704 1.660 - 1.667: 1.0079% ( 32) 00:16:10.704 1.667 - 1.673: 1.1017% ( 19) 00:16:10.704 1.673 - 1.680: 1.1067% ( 1) 00:16:10.704 1.680 - 1.687: 1.1165% ( 2) 00:16:10.704 1.687 - 1.693: 1.1363% ( 4) 00:16:10.704 1.693 - 1.700: 34.8303% ( 6820) 00:16:10.704 1.700 - 1.707: 49.7456% ( 3019) 00:16:10.704 1.707 - 1.720: 68.6527% ( 3827) 00:16:10.704 1.720 - 1.733: 79.8034% ( 2257) 00:16:10.704 1.733 - 1.747: 83.2716% ( 702) 00:16:10.704 1.747 - 1.760: 85.4750% ( 446) 00:16:10.704 1.760 - 1.773: 90.2426% ( 965) 00:16:10.704 1.773 - 1.787: 95.1238% ( 988) 00:16:10.704 1.787 - 1.800: 97.7570% ( 533) 00:16:10.704 1.800 - 1.813: 99.0317% ( 258) 00:16:10.704 1.813 - 1.827: 99.3577% ( 66) 00:16:10.704 1.827 - 1.840: 99.3824% ( 5) 00:16:10.704 2.027 - 2.040: 99.3874% ( 1) 00:16:10.704 3.920 - 3.947: 99.3923% ( 1) 00:16:10.704 4.107 - 4.133: 99.4022% ( 2) 00:16:10.704 4.133 - 4.160: 99.4121% ( 2) 00:16:10.704 4.373 - 4.400: 99.4170% ( 1) 00:16:10.704 4.400 - 4.427: 99.4220% ( 1) 00:16:10.704 4.480 - 4.507: 99.4269% ( 1) 00:16:10.704 4.507 - 4.533: 99.4318% ( 1) 00:16:10.704 4.560 - 4.587: 99.4417% ( 2) 00:16:10.704 4.587 - 4.613: 99.4516% ( 2) 00:16:10.704 4.667 - 4.693: 99.4615% ( 2) 00:16:10.704 4.693 - 4.720: 99.4664% ( 1) 00:16:10.704 4.720 - 4.747: 99.4714% ( 1) 00:16:10.704 4.773 - 4.800: 99.4763% ( 1) 00:16:10.704 4.827 - 4.853: 99.4813% ( 1) 00:16:10.704 4.880 - 4.907: 99.4862% ( 1) 00:16:10.704 4.907 - 4.933: 99.4961% ( 2) 00:16:10.704 4.933 - 4.960: 99.5060% ( 2) 00:16:10.704 4.960 - 4.987: 99.5109% ( 1) 00:16:10.704 5.013 - 5.040: 99.5208% ( 2) 00:16:10.704 5.040 - 5.067: 99.5257% ( 1) 00:16:10.704 5.067 - 5.093: 99.5356% ( 2) 00:16:10.704 5.120 - 5.147: 99.5405% ( 1) 
00:16:10.704 5.227 - 5.253: 99.5455% ( 1) 00:16:10.704 5.333 - 5.360: 99.5504% ( 1) 00:16:10.704 5.387 - 5.413: 99.5554% ( 1) 00:16:10.704 5.440 - 5.467: 99.5603% ( 1) 00:16:10.704 5.600 - 5.627: 99.5652% ( 1) 00:16:10.704 5.680 - 5.707: 99.5702% ( 1) 00:16:10.704 5.840 - 5.867: 99.5751% ( 1) 00:16:10.704 5.973 - 6.000: 99.5801% ( 1) 00:16:10.704 6.053 - 6.080: 99.5899% ( 2) 00:16:10.704 6.107 - 6.133: 99.5949% ( 1) 00:16:10.705 6.213 - 6.240: 99.5998% ( 1) 00:16:10.705 6.400 - 6.427: 99.6048% ( 1) 00:16:10.705 6.693 - 6.720: 99.6097% ( 1) 00:16:10.705 7.307 - 7.360: 99.6146% ( 1) 00:16:10.705 7.573 - 7.627: 99.6196% ( 1) 00:16:10.705 9.333 - 9.387: 99.6245% ( 1) 00:16:10.705 3795.627 - 3822.933: 99.6295% ( 1) 00:16:10.705 3986.773 - 4014.080: 99.9951% ( 74) 00:16:10.705 4969.813 - 4997.120: 100.0000% ( 1) 00:16:10.705 00:16:10.705 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:10.705 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:10.705 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:10.705 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:10.705 11:39:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:10.705 [ 00:16:10.705 { 00:16:10.705 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:10.705 "subtype": "Discovery", 00:16:10.705 "listen_addresses": [], 00:16:10.705 "allow_any_host": true, 00:16:10.705 "hosts": [] 00:16:10.705 }, 00:16:10.705 { 00:16:10.705 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:10.705 "subtype": "NVMe", 00:16:10.705 "listen_addresses": [ 00:16:10.705 { 00:16:10.705 "trtype": "VFIOUSER", 00:16:10.705 "adrfam": "IPv4", 00:16:10.705 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:10.705 "trsvcid": "0" 00:16:10.705 } 00:16:10.705 ], 00:16:10.705 "allow_any_host": true, 00:16:10.705 "hosts": [], 00:16:10.705 "serial_number": "SPDK1", 00:16:10.705 "model_number": "SPDK bdev Controller", 00:16:10.705 "max_namespaces": 32, 00:16:10.705 "min_cntlid": 1, 00:16:10.705 "max_cntlid": 65519, 00:16:10.705 "namespaces": [ 00:16:10.705 { 00:16:10.705 "nsid": 1, 00:16:10.705 "bdev_name": "Malloc1", 00:16:10.705 "name": "Malloc1", 00:16:10.705 "nguid": "CA8460D1EA9F48D483F276C69BB9073F", 00:16:10.705 "uuid": "ca8460d1-ea9f-48d4-83f2-76c69bb9073f" 00:16:10.705 }, 00:16:10.705 { 00:16:10.705 "nsid": 2, 00:16:10.705 "bdev_name": "Malloc3", 00:16:10.705 "name": "Malloc3", 00:16:10.705 "nguid": "3EA763DC24DE412E894BE263DCA28DC7", 00:16:10.705 "uuid": "3ea763dc-24de-412e-894b-e263dca28dc7" 00:16:10.705 } 00:16:10.705 ] 00:16:10.705 }, 00:16:10.705 { 00:16:10.705 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:10.705 "subtype": "NVMe", 00:16:10.705 "listen_addresses": [ 00:16:10.705 { 00:16:10.705 "trtype": "VFIOUSER", 00:16:10.705 "adrfam": "IPv4", 00:16:10.705 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:10.705 "trsvcid": "0" 00:16:10.705 } 00:16:10.705 ], 00:16:10.705 "allow_any_host": true, 00:16:10.705 "hosts": [], 00:16:10.705 "serial_number": "SPDK2", 00:16:10.705 "model_number": "SPDK bdev Controller", 00:16:10.705 "max_namespaces": 32, 00:16:10.705 "min_cntlid": 1, 
00:16:10.705 "max_cntlid": 65519, 00:16:10.705 "namespaces": [ 00:16:10.705 { 00:16:10.705 "nsid": 1, 00:16:10.705 "bdev_name": "Malloc2", 00:16:10.705 "name": "Malloc2", 00:16:10.705 "nguid": "01FB3D2D262A417FAD09CFB988B9C374", 00:16:10.705 "uuid": "01fb3d2d-262a-417f-ad09-cfb988b9c374" 00:16:10.705 } 00:16:10.705 ] 00:16:10.705 } 00:16:10.705 ] 00:16:10.705 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:10.705 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:10.705 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1032763 00:16:10.705 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:10.705 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:16:10.705 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:10.705 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:10.705 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:16:10.705 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:10.705 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:10.705 [2024-11-15 11:39:36.172466] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:10.705 Malloc4 00:16:10.966 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:10.966 [2024-11-15 11:39:36.369806] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:10.966 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:10.966 Asynchronous Event Request test 00:16:10.966 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:10.966 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:10.966 Registering asynchronous event callbacks... 00:16:10.966 Starting namespace attribute notice tests for all controllers... 00:16:10.966 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:10.966 aer_cb - Changed Namespace 00:16:10.966 Cleaning up... 
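(Note: the Asynchronous Event Request exercise above boils down to three RPCs: create a new malloc bdev, hot-add it as a second namespace so the target raises a Namespace Attribute Changed event for the outstanding AER, then list the subsystems to confirm the new namespace, which is what the JSON dump below shows. A condensed sketch with the values from this run:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems

The aer_cb line above, firing for log page 4 (Changed Namespace List), is the expected reaction to the nvmf_subsystem_add_ns call.)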
00:16:11.227 [
00:16:11.227   {
00:16:11.227     "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:16:11.227     "subtype": "Discovery",
00:16:11.227     "listen_addresses": [],
00:16:11.227     "allow_any_host": true,
00:16:11.227     "hosts": []
00:16:11.227   },
00:16:11.227   {
00:16:11.227     "nqn": "nqn.2019-07.io.spdk:cnode1",
00:16:11.227     "subtype": "NVMe",
00:16:11.227     "listen_addresses": [
00:16:11.227       {
00:16:11.227         "trtype": "VFIOUSER",
00:16:11.227         "adrfam": "IPv4",
00:16:11.227         "traddr": "/var/run/vfio-user/domain/vfio-user1/1",
00:16:11.227         "trsvcid": "0"
00:16:11.227       }
00:16:11.227     ],
00:16:11.227     "allow_any_host": true,
00:16:11.227     "hosts": [],
00:16:11.227     "serial_number": "SPDK1",
00:16:11.227     "model_number": "SPDK bdev Controller",
00:16:11.227     "max_namespaces": 32,
00:16:11.227     "min_cntlid": 1,
00:16:11.227     "max_cntlid": 65519,
00:16:11.227     "namespaces": [
00:16:11.227       {
00:16:11.227         "nsid": 1,
00:16:11.227         "bdev_name": "Malloc1",
00:16:11.227         "name": "Malloc1",
00:16:11.227         "nguid": "CA8460D1EA9F48D483F276C69BB9073F",
00:16:11.227         "uuid": "ca8460d1-ea9f-48d4-83f2-76c69bb9073f"
00:16:11.227       },
00:16:11.227       {
00:16:11.227         "nsid": 2,
00:16:11.227         "bdev_name": "Malloc3",
00:16:11.227         "name": "Malloc3",
00:16:11.227         "nguid": "3EA763DC24DE412E894BE263DCA28DC7",
00:16:11.227         "uuid": "3ea763dc-24de-412e-894b-e263dca28dc7"
00:16:11.227       }
00:16:11.227     ]
00:16:11.227   },
00:16:11.227   {
00:16:11.227     "nqn": "nqn.2019-07.io.spdk:cnode2",
00:16:11.227     "subtype": "NVMe",
00:16:11.227     "listen_addresses": [
00:16:11.227       {
00:16:11.227         "trtype": "VFIOUSER",
00:16:11.227         "adrfam": "IPv4",
00:16:11.227         "traddr": "/var/run/vfio-user/domain/vfio-user2/2",
00:16:11.227         "trsvcid": "0"
00:16:11.227       }
00:16:11.227     ],
00:16:11.227     "allow_any_host": true,
00:16:11.227     "hosts": [],
00:16:11.227     "serial_number": "SPDK2",
00:16:11.227     "model_number": "SPDK bdev Controller",
00:16:11.227     "max_namespaces": 32,
00:16:11.227     "min_cntlid": 1,
00:16:11.227     "max_cntlid": 65519,
00:16:11.227     "namespaces": [
00:16:11.227       {
00:16:11.227         "nsid": 1,
00:16:11.227         "bdev_name": "Malloc2",
00:16:11.227         "name": "Malloc2",
00:16:11.227         "nguid": "01FB3D2D262A417FAD09CFB988B9C374",
00:16:11.227         "uuid": "01fb3d2d-262a-417f-ad09-cfb988b9c374"
00:16:11.227       },
00:16:11.227       {
00:16:11.227         "nsid": 2,
00:16:11.227         "bdev_name": "Malloc4",
00:16:11.227         "name": "Malloc4",
00:16:11.227         "nguid": "D4E821D2402C48D0B13509A5542A0B91",
00:16:11.227         "uuid": "d4e821d2-402c-48d0-b135-09a5542a0b91"
00:16:11.227       }
00:16:11.227     ]
00:16:11.227   }
00:16:11.227 ]
00:16:11.227 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1032763
00:16:11.227 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user
00:16:11.227 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1023433
00:16:11.227 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 1023433 ']'
00:16:11.227 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 1023433
00:16:11.227 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname
00:16:11.227 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:16:11.227 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1023433
00:16:11.227 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:16:11.227 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:16:11.227 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1023433'
00:16:11.227 killing process with pid 1023433
00:16:11.227 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 1023433
00:16:11.227 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 1023433
00:16:11.488 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user
00:16:11.488 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:16:11.488 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I'
00:16:11.488 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode
00:16:11.488 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I'
00:16:11.488 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1033009
00:16:11.488 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1033009'
00:16:11.488 Process pid: 1033009
00:16:11.488 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
00:16:11.488 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode
00:16:11.488 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1033009
00:16:11.488 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 1033009 ']'
00:16:11.488 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:11.488 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100
00:16:11.488 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:11.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:11.488 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable
00:16:11.488 11:39:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x
00:16:11.488 [2024-11-15 11:39:36.847465] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:16:11.488 [2024-11-15 11:39:36.848409] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
00:16:11.488 [2024-11-15 11:39:36.848453] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:11.488 [2024-11-15 11:39:36.934248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:16:11.488 [2024-11-15 11:39:36.969455] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:16:11.488 [2024-11-15 11:39:36.969492] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:16:11.488 [2024-11-15 11:39:36.969498] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:16:11.488 [2024-11-15 11:39:36.969503] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:16:11.488 [2024-11-15 11:39:36.969508] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:16:11.488 [2024-11-15 11:39:36.970904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:16:11.488 [2024-11-15 11:39:36.970950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:16:11.488 [2024-11-15 11:39:36.971062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:11.488 [2024-11-15 11:39:36.971065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:16:11.488 [2024-11-15 11:39:37.025033] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:16:11.749 [2024-11-15 11:39:37.026264] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:16:11.749 [2024-11-15 11:39:37.027242] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:16:11.749 [2024-11-15 11:39:37.027776] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:16:11.749 [2024-11-15 11:39:37.027790] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
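(Note: the notices above come from the target being relaunched for the interrupt-mode variant of the test. A condensed sketch of that bring-up, using the exact command and flags from this run; the RPC calls that follow below then recreate the transport and both vfio-user subsystems:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I

With --interrupt-mode, the reactors and nvmf poll groups run interrupt-driven instead of busy polling, which is what the spdk_thread_set_interrupt_mode notices record.)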
00:16:12.321 11:39:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:16:12.321 11:39:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0
00:16:12.321 11:39:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1
00:16:13.264 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
00:16:13.525 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user
00:16:13.525 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2
00:16:13.525 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:16:13.525 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1
00:16:13.525 11:39:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:16:13.786 Malloc1
00:16:13.786 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
00:16:13.786 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
00:16:14.047 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
00:16:14.307 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES)
00:16:14.307 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2
00:16:14.307 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:16:14.568 Malloc2
00:16:14.568 11:39:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2
00:16:14.568 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2
00:16:14.828 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0
00:16:15.089 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user
00:16:15.089 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1033009
00:16:15.089 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 1033009 ']'
00:16:15.089 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 1033009
00:16:15.089 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname
00:16:15.089 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:16:15.089 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1033009
00:16:15.089 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:16:15.089 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:16:15.089 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1033009'
00:16:15.089 killing process with pid 1033009
00:16:15.089 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 1033009
00:16:15.089 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 1033009
00:16:15.089 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user
00:16:15.089 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:16:15.089
00:16:15.089 real 0m50.265s
00:16:15.089 user 3m12.478s
00:16:15.089 sys 0m2.590s
00:16:15.089 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1128 -- # xtrace_disable
00:16:15.089 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x
00:16:15.089 ************************************
00:16:15.089 END TEST nvmf_vfio_user
00:16:15.089 ************************************
00:16:15.351 11:39:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp
00:16:15.351 11:39:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:16:15.351 11:39:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable
00:16:15.351 11:39:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:16:15.351 ************************************
00:16:15.351 START TEST nvmf_vfio_user_nvme_compliance
00:16:15.351 ************************************
00:16:15.351 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp
00:16:15.351 * Looking for test storage...
00:16:15.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:15.351 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:15.351 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:16:15.351 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:15.351 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:15.351 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:15.351 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:15.351 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:15.351 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:16:15.351 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:16:15.351 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:16:15.351 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:16:15.351 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:16:15.351 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:16:15.351 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:16:15.351 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:15.351 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:16:15.351 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:16:15.351 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:15.351 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:15.351 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:16:15.351 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:16:15.351 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:15.351 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:16:15.351 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:16:15.351 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:16:15.351 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:16:15.351 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:15.351 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:16:15.351 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:16:15.351 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:15.351 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:15.351 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:16:15.351 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:15.351 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:15.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.351 --rc genhtml_branch_coverage=1 00:16:15.351 --rc genhtml_function_coverage=1 00:16:15.351 --rc genhtml_legend=1 00:16:15.351 --rc geninfo_all_blocks=1 00:16:15.351 --rc geninfo_unexecuted_blocks=1 00:16:15.351 00:16:15.351 ' 00:16:15.351 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:15.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.351 --rc genhtml_branch_coverage=1 00:16:15.351 --rc genhtml_function_coverage=1 00:16:15.351 --rc genhtml_legend=1 00:16:15.351 --rc geninfo_all_blocks=1 00:16:15.351 --rc geninfo_unexecuted_blocks=1 00:16:15.351 00:16:15.351 ' 00:16:15.351 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:15.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.351 --rc genhtml_branch_coverage=1 00:16:15.351 --rc genhtml_function_coverage=1 00:16:15.351 --rc genhtml_legend=1 00:16:15.351 --rc geninfo_all_blocks=1 00:16:15.351 --rc geninfo_unexecuted_blocks=1 00:16:15.351 00:16:15.351 ' 00:16:15.613 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:15.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:15.613 --rc genhtml_branch_coverage=1 00:16:15.613 --rc genhtml_function_coverage=1 00:16:15.613 --rc genhtml_legend=1 00:16:15.613 --rc geninfo_all_blocks=1 00:16:15.613 --rc 
geninfo_unexecuted_blocks=1 00:16:15.613 00:16:15.613 ' 00:16:15.613 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:15.613 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:16:15.613 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:15.613 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:15.613 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:15.613 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:15.613 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:15.613 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:15.613 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:15.613 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:15.613 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:15.613 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:15.613 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:15.613 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:15.613 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:15.613 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:15.613 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:15.613 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:15.613 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:15.613 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:16:15.613 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:15.613 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:15.613 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:15.613 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.613 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.613 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.613 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:16:15.613 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.613 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:16:15.613 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:15.613 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:15.613 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:15.613 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:15.613 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:16:15.613 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:15.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:15.613 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:15.613 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:15.613 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:15.613 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:15.613 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:15.614 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:15.614 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:15.614 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:15.614 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1033861 00:16:15.614 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1033861' 00:16:15.614 Process pid: 1033861 00:16:15.614 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:15.614 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:15.614 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1033861 00:16:15.614 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # '[' -z 1033861 ']' 00:16:15.614 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.614 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:15.614 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.614 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:15.614 11:39:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:15.614 [2024-11-15 11:39:40.944783] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:16:15.614 [2024-11-15 11:39:40.944886] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:15.614 [2024-11-15 11:39:41.035623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:15.614 [2024-11-15 11:39:41.065869] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:15.614 [2024-11-15 11:39:41.065897] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:15.614 [2024-11-15 11:39:41.065904] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:15.614 [2024-11-15 11:39:41.065912] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:15.614 [2024-11-15 11:39:41.065916] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:15.614 [2024-11-15 11:39:41.067115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:15.614 [2024-11-15 11:39:41.067227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:15.614 [2024-11-15 11:39:41.067224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:16.557 11:39:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:16.557 11:39:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@866 -- # return 0 00:16:16.557 11:39:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:16:17.499 11:39:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:17.499 11:39:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:17.499 11:39:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:17.499 11:39:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.499 11:39:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:17.499 11:39:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.499 11:39:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:17.499 11:39:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:17.499 11:39:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.499 11:39:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:17.499 malloc0 00:16:17.499 11:39:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.499 11:39:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:17.499 11:39:42 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.499 11:39:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:17.499 11:39:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.499 11:39:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:17.499 11:39:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.499 11:39:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:17.499 11:39:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.499 11:39:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:17.499 11:39:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.499 11:39:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:17.499 11:39:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.499 11:39:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:17.499 00:16:17.499 00:16:17.499 CUnit - A unit testing framework for C - Version 2.1-3 00:16:17.499 http://cunit.sourceforge.net/ 00:16:17.499 00:16:17.499 00:16:17.499 Suite: nvme_compliance 00:16:17.760 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-15 11:39:42.998958] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:17.760 [2024-11-15 11:39:43.000246] vfio_user.c: 800:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:17.760 [2024-11-15 11:39:43.000257] vfio_user.c:5503:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:17.760 [2024-11-15 11:39:43.000262] vfio_user.c:5596:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:17.760 [2024-11-15 11:39:43.003984] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:17.760 passed 00:16:17.760 Test: admin_identify_ctrlr_verify_fused ...[2024-11-15 11:39:43.080492] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:17.760 [2024-11-15 11:39:43.083511] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:17.760 passed 00:16:17.760 Test: admin_identify_ns ...[2024-11-15 11:39:43.163413] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:17.760 [2024-11-15 11:39:43.224572] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:17.760 [2024-11-15 11:39:43.232574] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:17.760 [2024-11-15 11:39:43.253648] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:16:18.020 passed 00:16:18.020 Test: admin_get_features_mandatory_features ...[2024-11-15 11:39:43.326886] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:18.020 [2024-11-15 11:39:43.329912] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:18.020 passed 00:16:18.020 Test: admin_get_features_optional_features ...[2024-11-15 11:39:43.405399] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:18.020 [2024-11-15 11:39:43.408417] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:18.020 passed 00:16:18.020 Test: admin_set_features_number_of_queues ...[2024-11-15 11:39:43.485461] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:18.280 [2024-11-15 11:39:43.606679] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:18.280 passed 00:16:18.280 Test: admin_get_log_page_mandatory_logs ...[2024-11-15 11:39:43.677913] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:18.280 [2024-11-15 11:39:43.680935] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:18.280 passed 00:16:18.280 Test: admin_get_log_page_with_lpo ...[2024-11-15 11:39:43.756662] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:18.541 [2024-11-15 11:39:43.826572] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:18.541 [2024-11-15 11:39:43.839625] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:18.541 passed 00:16:18.541 Test: fabric_property_get ...[2024-11-15 11:39:43.914831] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:18.541 [2024-11-15 11:39:43.916037] vfio_user.c:5596:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:16:18.541 [2024-11-15 11:39:43.917856] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:18.541 passed 00:16:18.541 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-15 11:39:43.994320] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:18.541 [2024-11-15 11:39:43.995521] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:18.541 [2024-11-15 11:39:43.997341] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:18.541 passed 00:16:18.801 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-15 11:39:44.074137] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:18.801 [2024-11-15 11:39:44.157568] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:18.801 [2024-11-15 11:39:44.172568] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:18.801 [2024-11-15 11:39:44.177638] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:18.801 passed 00:16:18.801 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-15 11:39:44.252697] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:18.801 [2024-11-15 11:39:44.253894] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:18.801 [2024-11-15 11:39:44.255716] vfio_user.c:2794:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:16:18.801 passed 00:16:19.061 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-15 11:39:44.333855] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:19.061 [2024-11-15 11:39:44.409569] vfio_user.c:2315:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:19.061 [2024-11-15 11:39:44.433570] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:19.061 [2024-11-15 11:39:44.438636] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:19.061 passed 00:16:19.061 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-15 11:39:44.512796] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:19.061 [2024-11-15 11:39:44.513995] vfio_user.c:2154:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:19.061 [2024-11-15 11:39:44.514013] vfio_user.c:2148:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:19.061 [2024-11-15 11:39:44.517822] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:19.061 passed 00:16:19.326 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-15 11:39:44.592551] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:19.326 [2024-11-15 11:39:44.685570] vfio_user.c:2236:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:19.326 [2024-11-15 11:39:44.693568] vfio_user.c:2236:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:19.326 [2024-11-15 11:39:44.701573] vfio_user.c:2034:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:16:19.326 [2024-11-15 11:39:44.709567] vfio_user.c:2034:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:19.326 [2024-11-15 11:39:44.738640] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:19.326 passed 00:16:19.326 Test: admin_create_io_sq_verify_pc ...[2024-11-15 11:39:44.813843] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:19.586 [2024-11-15 11:39:44.830578] vfio_user.c:2047:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:19.586 [2024-11-15 11:39:44.847996] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:19.586 passed 00:16:19.586 Test: admin_create_io_qp_max_qps ...[2024-11-15 11:39:44.931494] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:20.968 [2024-11-15 11:39:46.040570] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:16:20.968 [2024-11-15 11:39:46.423080] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:20.969 passed 00:16:21.229 Test: admin_create_io_sq_shared_cq ...[2024-11-15 11:39:46.503466] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:21.229 [2024-11-15 11:39:46.636573] vfio_user.c:2315:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:21.229 [2024-11-15 11:39:46.673617] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:21.229 passed 00:16:21.229 00:16:21.229 Run Summary: Type Total Ran Passed Failed Inactive 00:16:21.229 suites 1 1 n/a 0 0 00:16:21.229 tests 18 18 18 0 0 00:16:21.229 asserts 
360 360 360 0 n/a 00:16:21.229 00:16:21.229 Elapsed time = 1.516 seconds 00:16:21.229 11:39:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1033861 00:16:21.229 11:39:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' -z 1033861 ']' 00:16:21.229 11:39:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # kill -0 1033861 00:16:21.229 11:39:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # uname 00:16:21.229 11:39:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:21.229 11:39:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1033861 00:16:21.489 11:39:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:21.489 11:39:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:21.489 11:39:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1033861' 00:16:21.489 killing process with pid 1033861 00:16:21.489 11:39:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@971 -- # kill 1033861 00:16:21.489 11:39:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@976 -- # wait 1033861 00:16:21.489 11:39:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:21.489 11:39:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:21.489 00:16:21.489 real 0m6.244s 00:16:21.489 user 0m17.684s 00:16:21.489 sys 0m0.540s 00:16:21.489 11:39:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:21.489 11:39:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:21.489 ************************************ 00:16:21.489 END TEST nvmf_vfio_user_nvme_compliance 00:16:21.489 ************************************ 00:16:21.489 11:39:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:21.489 11:39:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:21.489 11:39:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:21.489 11:39:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:21.489 ************************************ 00:16:21.489 START TEST nvmf_vfio_user_fuzz 00:16:21.489 ************************************ 00:16:21.489 11:39:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:21.751 * Looking for test storage... 
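
Note on the compliance run that just finished: CUnit reports 18/18 tests and 360/360 asserts passing in ~1.5 s. Each case follows the same shape — enable the vfio-user controller, drive one admin-command edge case (the *ERROR* lines are the target correctly rejecting malformed or out-of-range requests, so they are expected output), then disable the controller. The suite binary can be re-run standalone against any live vfio-user endpoint; a minimal sketch using the exact invocation recorded by compliance.sh@40 above (the transport, subsystem, namespace, and listener must already exist, as set up by the RPCs at the top of the suite):

  # Same invocation as logged above; assumes the vfio-user endpoint at
  # /var/run/vfio-user is still being served by a running nvmf_tgt.
  ./test/nvme/compliance/nvme_compliance -g \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'
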
00:16:21.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:21.751 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:21.751 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:16:21.751 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:21.751 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:21.751 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:21.751 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:21.751 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:21.751 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:16:21.751 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:16:21.751 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:16:21.751 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:16:21.751 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:16:21.751 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:16:21.751 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:16:21.751 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:21.751 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:16:21.751 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:16:21.751 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:21.751 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:21.751 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:16:21.751 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:16:21.751 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:21.751 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:16:21.751 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:16:21.751 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:16:21.751 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:16:21.751 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:21.751 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:16:21.751 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:16:21.751 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:21.751 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:21.751 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:16:21.751 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:21.751 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:21.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.752 --rc genhtml_branch_coverage=1 00:16:21.752 --rc genhtml_function_coverage=1 00:16:21.752 --rc genhtml_legend=1 00:16:21.752 --rc geninfo_all_blocks=1 00:16:21.752 --rc geninfo_unexecuted_blocks=1 00:16:21.752 00:16:21.752 ' 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:21.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.752 --rc genhtml_branch_coverage=1 00:16:21.752 --rc genhtml_function_coverage=1 00:16:21.752 --rc genhtml_legend=1 00:16:21.752 --rc geninfo_all_blocks=1 00:16:21.752 --rc geninfo_unexecuted_blocks=1 00:16:21.752 00:16:21.752 ' 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:21.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.752 --rc genhtml_branch_coverage=1 00:16:21.752 --rc genhtml_function_coverage=1 00:16:21.752 --rc genhtml_legend=1 00:16:21.752 --rc geninfo_all_blocks=1 00:16:21.752 --rc geninfo_unexecuted_blocks=1 00:16:21.752 00:16:21.752 ' 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:21.752 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:21.752 --rc genhtml_branch_coverage=1 00:16:21.752 --rc genhtml_function_coverage=1 00:16:21.752 --rc genhtml_legend=1 00:16:21.752 --rc geninfo_all_blocks=1 00:16:21.752 --rc geninfo_unexecuted_blocks=1 00:16:21.752 00:16:21.752 ' 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:16:21.752 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1035165 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1035165' 00:16:21.752 Process pid: 1035165 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1035165 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # '[' -z 1035165 ']' 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:21.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
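
The "[: : integer expression expected" complaint a few lines up is worth flagging: nvmf/common.sh line 33 runs a numeric test on a variable that is empty in this configuration ('[' '' -eq 1 ']'), bash's test builtin rejects the empty operand, the condition evaluates false, and the run continues unharmed. A defensive sketch of the pattern (VAR is a stand-in name, not the actual variable tested in common.sh):

  # Guarding a numeric test against an unset/empty variable: ${VAR:-0}
  # substitutes 0, so [ ... -eq 1 ] always sees a valid integer operand.
  if [ "${VAR:-0}" -eq 1 ]; then
      echo "feature enabled"
  fi
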
00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:21.752 11:39:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:22.693 11:39:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:22.693 11:39:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@866 -- # return 0 00:16:22.693 11:39:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:23.632 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:23.632 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.632 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:23.632 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.632 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:23.632 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:23.632 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.632 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:23.632 malloc0 00:16:23.632 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.632 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:23.632 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.632 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:23.633 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.633 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:23.633 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.633 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:23.633 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.633 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:23.633 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.633 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:23.892 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.892 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
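
At this point the fuzz target is fully assembled over RPC: a VFIOUSER transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2021-09.io.spdk:cnode0 with that bdev as its namespace, and a vfio-user listener at /var/run/vfio-user. The rpc_cmd helper used above is autotest plumbing; the same setup can be replayed against a running nvmf_tgt with scripts/rpc.py, roughly as follows (a sketch condensed from the sequence logged by vfio_user_fuzz.sh@32..@39, not the test script itself):

  mkdir -p /var/run/vfio-user
  ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
  ./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0     # 64 MiB, 512 B blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
      -t VFIOUSER -a /var/run/vfio-user -s 0
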
00:16:23.892 11:39:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:56.011 Fuzzing completed. Shutting down the fuzz application 00:16:56.011 00:16:56.011 Dumping successful admin opcodes: 00:16:56.011 8, 9, 10, 24, 00:16:56.011 Dumping successful io opcodes: 00:16:56.011 0, 00:16:56.011 NS: 0x20000081ef00 I/O qp, Total commands completed: 1421836, total successful commands: 5589, random_seed: 3354668352 00:16:56.011 NS: 0x20000081ef00 admin qp, Total commands completed: 353974, total successful commands: 2851, random_seed: 2618560896 00:16:56.011 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:56.011 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.011 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:56.011 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.011 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1035165 00:16:56.011 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' -z 1035165 ']' 00:16:56.011 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # kill -0 1035165 00:16:56.011 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # uname 00:16:56.011 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:56.011 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1035165 00:16:56.011 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:56.011 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:56.011 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1035165' 00:16:56.011 killing process with pid 1035165 00:16:56.011 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@971 -- # kill 1035165 00:16:56.011 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@976 -- # wait 1035165 00:16:56.011 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:56.011 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:56.011 00:16:56.011 real 0m32.818s 00:16:56.011 user 0m37.392s 00:16:56.011 sys 0m24.548s 00:16:56.011 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:56.011 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:56.011 
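
Reading the fuzz summary: the run lasted the requested 30 seconds (wall clock 11:39:49 to 11:40:19, matching -t 30) with a fixed seed (-S 123456), so the pass is reproducible. Of 1,421,836 randomly generated I/O commands the target accepted 5,589 (5589/1421836 ≈ 0.39%), and of 353,974 admin commands it accepted 2,851 (2851/353974 ≈ 0.81%); everything else was rejected, which is the desired outcome for garbage input. Decoding the successful opcodes (printed in decimal) against the NVMe spec: admin 8/9/10/24 are Abort (0x08), Set Features (0x09), Get Features (0x0A), and Keep Alive (0x18), and the lone I/O opcode 0 is Flush — all commands with few or no required parameters, hence the easiest for random bytes to form validly. The same pass can be repeated with the invocation recorded above (flags -t and -S are run time in seconds and RNG seed; -N and -a are as recorded, see the tool's usage text):

  ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
      -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' \
      -N -a
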
************************************ 00:16:56.011 END TEST nvmf_vfio_user_fuzz 00:16:56.011 ************************************ 00:16:56.011 11:40:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:56.011 11:40:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:56.011 11:40:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:56.011 11:40:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:56.011 ************************************ 00:16:56.011 START TEST nvmf_auth_target 00:16:56.011 ************************************ 00:16:56.011 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:56.011 * Looking for test storage... 00:16:56.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:56.011 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:56.011 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:16:56.011 11:40:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:56.011 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:56.011 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:56.011 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:56.011 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:56.011 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:56.011 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:56.011 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:56.011 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:56.011 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:56.011 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:56.011 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:56.011 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:56.011 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:56.011 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:56.011 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:56.011 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:56.011 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:56.011 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:56.011 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:56.011 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:56.011 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:56.011 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:56.011 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:56.011 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:56.011 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:56.011 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:56.011 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:56.011 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:56.011 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:56.011 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:56.011 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:56.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.011 --rc genhtml_branch_coverage=1 00:16:56.011 --rc genhtml_function_coverage=1 00:16:56.011 --rc genhtml_legend=1 00:16:56.011 --rc geninfo_all_blocks=1 00:16:56.011 --rc geninfo_unexecuted_blocks=1 00:16:56.011 00:16:56.011 ' 00:16:56.011 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:56.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.011 --rc genhtml_branch_coverage=1 00:16:56.011 --rc genhtml_function_coverage=1 00:16:56.011 --rc genhtml_legend=1 00:16:56.011 --rc geninfo_all_blocks=1 00:16:56.011 --rc geninfo_unexecuted_blocks=1 00:16:56.012 00:16:56.012 ' 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:56.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.012 --rc genhtml_branch_coverage=1 00:16:56.012 --rc genhtml_function_coverage=1 00:16:56.012 --rc genhtml_legend=1 00:16:56.012 --rc geninfo_all_blocks=1 00:16:56.012 --rc geninfo_unexecuted_blocks=1 00:16:56.012 00:16:56.012 ' 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:56.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.012 --rc genhtml_branch_coverage=1 00:16:56.012 --rc genhtml_function_coverage=1 00:16:56.012 --rc genhtml_legend=1 00:16:56.012 --rc geninfo_all_blocks=1 00:16:56.012 --rc geninfo_unexecuted_blocks=1 00:16:56.012 00:16:56.012 ' 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:56.012 11:40:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:56.012 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:56.012 11:40:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.602 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:02.602 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:17:02.602 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:02.602 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:02.602 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:02.602 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:02.602 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:02.602 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:17:02.602 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:02.602 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:17:02.602 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:17:02.603 
11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:02.603 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:02.603 11:40:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:02.603 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:02.603 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:02.603 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:02.603 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:02.604 11:40:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:02.604 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:02.604 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.641 ms 00:17:02.604 00:17:02.604 --- 10.0.0.2 ping statistics --- 00:17:02.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.604 rtt min/avg/max/mdev = 0.641/0.641/0.641/0.000 ms 00:17:02.604 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:02.604 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:02.604 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:17:02.604 00:17:02.604 --- 10.0.0.1 ping statistics --- 00:17:02.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.604 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:17:02.604 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:02.604 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:17:02.604 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:02.604 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:02.604 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:02.604 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:02.604 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:02.604 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:02.604 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:02.604 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:17:02.604 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:02.604 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:02.604 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.604 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1045210 00:17:02.604 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1045210 00:17:02.604 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:02.604 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 1045210 ']' 00:17:02.604 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.604 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:02.604 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
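
The two pings above confirm reachability in both directions before anything NVMe-related starts, and NVMF_APP is then re-prefixed with `ip netns exec cvl_0_0_ns_spdk`, so the nvmf_tgt launched by nvmfappstart (with -L nvmf_auth debug logging) runs entirely inside the namespace while its RPC socket stays reachable from the host. A sketch of the launch-and-wait pattern, with waitforlisten reduced to a simple poll (spdk_get_version is a standard SPDK RPC; the retry loop is simplified here):

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
    nvmfpid=$!
    # Wait until the target answers on its default RPC socket.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.2
    done
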
00:17:02.604 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:02.604 11:40:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.177 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:03.177 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:17:03.177 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:03.177 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:03.177 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.177 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:03.177 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1045271 00:17:03.177 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:03.177 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:03.177 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:17:03.177 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:03.177 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:03.177 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:03.177 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:17:03.177 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:03.177 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:03.177 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6e414cae715db57f038e6ce7a39d332a330e68c9e9be62aa 00:17:03.177 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:03.177 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.J3I 00:17:03.177 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6e414cae715db57f038e6ce7a39d332a330e68c9e9be62aa 0 00:17:03.177 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6e414cae715db57f038e6ce7a39d332a330e68c9e9be62aa 0 00:17:03.177 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:03.177 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:03.177 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6e414cae715db57f038e6ce7a39d332a330e68c9e9be62aa 00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
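
gen_dhchap_key above draws len/2 random bytes via `xxd -p -c0` and keeps the 48-character hex string itself as the secret; the `python -` step (its heredoc body is not shown in the trace) wraps it into the DHHC-1 form defined by NVMe DH-HMAC-CHAP (TP 8006): DHHC-1:<hash id>:<base64(secret plus a 4-byte CRC-32 trailer)>:. The log itself bears this out: the DHHC-1:00:NmU0MTRj...= secret passed to nvme connect further down decodes back to exactly this key plus four trailing bytes. A sketch of the wrapping, assuming a little-endian CRC-32 trailer (the byte order nvme-cli's gen-dhchap-key uses; treat this as illustrative, not the verbatim heredoc):

    gen_dhchap_key() {  # $1 = hash id (0=null 1=sha256 2=sha384 3=sha512), $2 = key length
        local key
        key=$(xxd -p -c0 -l $(($2 / 2)) /dev/urandom)
        # Append CRC-32 of the secret, base64 the blob, emit the DHHC-1 string.
        python3 -c 'import base64,struct,sys,zlib; k=sys.argv[1].encode(); crc=struct.pack("<I", zlib.crc32(k)); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k+crc).decode()))' "$key" "$1"
    }
    gen_dhchap_key 0 48   # null-digest host key, like keys[0] above
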
00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.J3I 00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.J3I 00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.J3I 00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=410eab990c0e0c8027ee597f5926c90d1fd7cda58588f4828bf01eeb77c3c56a 00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.7LP 00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 410eab990c0e0c8027ee597f5926c90d1fd7cda58588f4828bf01eeb77c3c56a 3 00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 410eab990c0e0c8027ee597f5926c90d1fd7cda58588f4828bf01eeb77c3c56a 3 00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=410eab990c0e0c8027ee597f5926c90d1fd7cda58588f4828bf01eeb77c3c56a 00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.7LP 00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.7LP 00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.7LP 00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=eec4868ad199f3f1461f60b2bbdef71b 00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.dcu 00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key eec4868ad199f3f1461f60b2bbdef71b 1 00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 eec4868ad199f3f1461f60b2bbdef71b 1 00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=eec4868ad199f3f1461f60b2bbdef71b 00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:17:03.178 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:03.439 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.dcu 00:17:03.439 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.dcu 00:17:03.439 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.dcu 00:17:03.439 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:17:03.439 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:03.439 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:03.439 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:03.439 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:03.439 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:03.439 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:03.439 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2117f9a4478b6c2cfbbeac1536eb857c41dfc523ff9f1507 00:17:03.439 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:03.439 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.LcL 00:17:03.439 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 2117f9a4478b6c2cfbbeac1536eb857c41dfc523ff9f1507 2 00:17:03.439 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2117f9a4478b6c2cfbbeac1536eb857c41dfc523ff9f1507 2 00:17:03.439 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:03.439 11:40:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:03.439 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2117f9a4478b6c2cfbbeac1536eb857c41dfc523ff9f1507 00:17:03.439 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:03.439 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:03.439 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.LcL 00:17:03.439 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.LcL 00:17:03.439 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.LcL 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5b5b4e798664852a151dde48d8529f1511bcc29ac6835b0f 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.m21 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5b5b4e798664852a151dde48d8529f1511bcc29ac6835b0f 2 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5b5b4e798664852a151dde48d8529f1511bcc29ac6835b0f 2 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5b5b4e798664852a151dde48d8529f1511bcc29ac6835b0f 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.m21 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.m21 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.m21 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5b7b04c75e9fa12a4e003a07e8b2a2ad 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.dCm 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5b7b04c75e9fa12a4e003a07e8b2a2ad 1 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5b7b04c75e9fa12a4e003a07e8b2a2ad 1 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5b7b04c75e9fa12a4e003a07e8b2a2ad 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.dCm 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.dCm 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.dCm 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:03.440 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:03.701 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:03.701 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f9be0f8b6d1e5d78eb1b63f2378618aae3ffc2f04f30b78d8d7372f202839b8d 00:17:03.701 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:03.701 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.SFx 00:17:03.701 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key f9be0f8b6d1e5d78eb1b63f2378618aae3ffc2f04f30b78d8d7372f202839b8d 3 00:17:03.701 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f9be0f8b6d1e5d78eb1b63f2378618aae3ffc2f04f30b78d8d7372f202839b8d 3 00:17:03.701 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:03.701 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:03.701 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f9be0f8b6d1e5d78eb1b63f2378618aae3ffc2f04f30b78d8d7372f202839b8d 00:17:03.701 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:17:03.701 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:03.701 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.SFx 00:17:03.701 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.SFx 00:17:03.701 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.SFx 00:17:03.701 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:17:03.701 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1045210 00:17:03.701 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 1045210 ']' 00:17:03.701 11:40:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:03.701 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:03.701 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:03.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:03.701 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:03.702 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.702 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:03.702 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:17:03.702 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1045271 /var/tmp/host.sock 00:17:03.702 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 1045271 ']' 00:17:03.702 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:17:03.702 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:03.702 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:03.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
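
At this point all four key slots are populated: keys[i] is the host secret for slot i and ckeys[i] the controller (bidirectional) secret. ckeys[3] is deliberately left empty, so slot 3 exercises unidirectional authentication, which is why the nvme connect for key3 further down passes only --dhchap-secret and no --dhchap-ctrl-secret. Summarizing the files generated above:

    slot  keys[i] (host secret)      ckeys[i] (ctrlr secret)    auth mode
    0     /tmp/spdk.key-null.J3I     /tmp/spdk.key-sha512.7LP   bidirectional
    1     /tmp/spdk.key-sha256.dcu   /tmp/spdk.key-sha384.LcL   bidirectional
    2     /tmp/spdk.key-sha384.m21   /tmp/spdk.key-sha256.dCm   bidirectional
    3     /tmp/spdk.key-sha512.SFx   (none)                     unidirectional
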
00:17:03.702 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:03.702 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.964 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:03.964 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:17:03.964 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:17:03.964 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.964 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.964 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.964 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:03.964 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.J3I 00:17:03.964 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.964 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.964 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.964 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.J3I 00:17:03.964 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.J3I 00:17:04.226 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.7LP ]] 00:17:04.226 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7LP 00:17:04.226 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.226 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.226 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.226 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7LP 00:17:04.226 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7LP 00:17:04.487 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:04.487 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.dcu 00:17:04.487 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.487 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.487 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.487 11:40:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.dcu 00:17:04.487 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.dcu 00:17:04.487 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.LcL ]] 00:17:04.487 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.LcL 00:17:04.487 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.487 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.487 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.487 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.LcL 00:17:04.487 11:40:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.LcL 00:17:04.748 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:04.748 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.m21 00:17:04.748 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.748 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.748 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.748 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.m21 00:17:04.748 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.m21 00:17:05.010 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.dCm ]] 00:17:05.010 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.dCm 00:17:05.010 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.010 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.010 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.010 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.dCm 00:17:05.010 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.dCm 00:17:05.271 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:05.271 11:40:30 
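
Each secret file is chmod 0600 and then registered twice under the same key name: rpc_cmd talks to the target's default socket (/var/tmp/spdk.sock) while hostrpc talks to the host-side spdk_tgt on /var/tmp/host.sock, since DH-HMAC-CHAP needs both ends to hold the shared secret. The per-slot pattern, condensed (the loop is a sketch; the RPC name and paths are taken from the trace):

    for sock in /var/tmp/spdk.sock /var/tmp/host.sock; do
        ./scripts/rpc.py -s "$sock" keyring_file_add_key key0  /tmp/spdk.key-null.J3I
        ./scripts/rpc.py -s "$sock" keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7LP
    done
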
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.SFx 00:17:05.271 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.271 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.271 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.271 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.SFx 00:17:05.271 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.SFx 00:17:05.532 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:17:05.532 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:05.532 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:05.532 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.532 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:05.532 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:05.532 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:17:05.532 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.532 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:05.532 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:05.532 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:05.532 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.532 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.532 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.532 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.532 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.532 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.532 11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.532 
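
Each connect_authenticate iteration first pins the host-side initiator to exactly one digest/dhgroup combination via bdev_nvme_set_options, then grants the host NQN access to the subsystem with the key pair under test. Condensed from the trace, with $HOSTNQN standing in for the long uuid-based NQN:

    ./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups null
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
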
11:40:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:05.794 00:17:05.794 11:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.794 11:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.794 11:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.055 11:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.055 11:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.055 11:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.055 11:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.055 11:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.055 11:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.055 { 00:17:06.055 "cntlid": 1, 00:17:06.055 "qid": 0, 00:17:06.055 "state": "enabled", 00:17:06.055 "thread": "nvmf_tgt_poll_group_000", 00:17:06.055 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:06.055 "listen_address": { 00:17:06.055 "trtype": "TCP", 00:17:06.055 "adrfam": "IPv4", 00:17:06.055 "traddr": "10.0.0.2", 00:17:06.055 "trsvcid": "4420" 00:17:06.055 }, 00:17:06.055 "peer_address": { 00:17:06.055 "trtype": "TCP", 00:17:06.055 "adrfam": "IPv4", 00:17:06.055 "traddr": "10.0.0.1", 00:17:06.055 "trsvcid": "44390" 00:17:06.055 }, 00:17:06.055 "auth": { 00:17:06.055 "state": "completed", 00:17:06.055 "digest": "sha256", 00:17:06.055 "dhgroup": "null" 00:17:06.055 } 00:17:06.055 } 00:17:06.055 ]' 00:17:06.055 11:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.055 11:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:06.055 11:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.055 11:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:06.055 11:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.317 11:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.317 11:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.317 11:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.317 11:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:NmU0MTRjYWU3MTVkYjU3ZjAzOGU2Y2U3YTM5ZDMzMmEzMzBlNjhjOWU5YmU2MmFhQ/wRzQ==: --dhchap-ctrl-secret DHHC-1:03:NDEwZWFiOTkwYzBlMGM4MDI3ZWU1OTdmNTkyNmM5MGQxZmQ3Y2RhNTg1ODhmNDgyOGJmMDFlZWI3N2MzYzU2YTRXPFQ=: 00:17:06.317 11:40:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmU0MTRjYWU3MTVkYjU3ZjAzOGU2Y2U3YTM5ZDMzMmEzMzBlNjhjOWU5YmU2MmFhQ/wRzQ==: --dhchap-ctrl-secret DHHC-1:03:NDEwZWFiOTkwYzBlMGM4MDI3ZWU1OTdmNTkyNmM5MGQxZmQ3Y2RhNTg1ODhmNDgyOGJmMDFlZWI3N2MzYzU2YTRXPFQ=: 00:17:07.259 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.259 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:07.259 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.259 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.259 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.259 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.259 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:07.259 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:07.259 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:17:07.259 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.259 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:07.259 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:07.259 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:07.259 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.259 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.259 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.259 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.259 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.259 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.259 11:40:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.259 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:07.519 00:17:07.519 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.519 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.520 11:40:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.780 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.780 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.780 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.780 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.780 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.780 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.780 { 00:17:07.780 "cntlid": 3, 00:17:07.780 "qid": 0, 00:17:07.780 "state": "enabled", 00:17:07.780 "thread": "nvmf_tgt_poll_group_000", 00:17:07.780 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:07.780 "listen_address": { 00:17:07.780 "trtype": "TCP", 00:17:07.780 "adrfam": "IPv4", 00:17:07.780 "traddr": "10.0.0.2", 00:17:07.780 "trsvcid": "4420" 00:17:07.780 }, 00:17:07.780 "peer_address": { 00:17:07.780 "trtype": "TCP", 00:17:07.780 "adrfam": "IPv4", 00:17:07.780 "traddr": "10.0.0.1", 00:17:07.780 "trsvcid": "42590" 00:17:07.780 }, 00:17:07.780 "auth": { 00:17:07.780 "state": "completed", 00:17:07.780 "digest": "sha256", 00:17:07.780 "dhgroup": "null" 00:17:07.780 } 00:17:07.780 } 00:17:07.780 ]' 00:17:07.780 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.780 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:07.780 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.780 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:07.780 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.780 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.780 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.780 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
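
After each attach, the test asserts that authentication actually completed with the negotiated parameters: the controller name must round-trip through bdev_nvme_get_controllers, and nvmf_subsystem_get_qpairs must report the qpair with auth.state == completed and the digest/dhgroup of the current pass. The jq assertions, condensed:

    qpairs=$(./scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null      ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
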
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.040 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWVjNDg2OGFkMTk5ZjNmMTQ2MWY2MGIyYmJkZWY3MWKp5kEY: --dhchap-ctrl-secret DHHC-1:02:MjExN2Y5YTQ0NzhiNmMyY2ZiYmVhYzE1MzZlYjg1N2M0MWRmYzUyM2ZmOWYxNTA3CmvkOw==: 00:17:08.040 11:40:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZWVjNDg2OGFkMTk5ZjNmMTQ2MWY2MGIyYmJkZWY3MWKp5kEY: --dhchap-ctrl-secret DHHC-1:02:MjExN2Y5YTQ0NzhiNmMyY2ZiYmVhYzE1MzZlYjg1N2M0MWRmYzUyM2ZmOWYxNTA3CmvkOw==: 00:17:08.611 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.611 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:08.611 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.611 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.611 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.611 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.611 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:08.611 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:08.873 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:17:08.873 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:08.873 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:08.873 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:08.873 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:08.873 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.873 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.873 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.873 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.873 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.873 11:40:34 
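
Besides the SPDK host stack, every slot is also exercised with the kernel initiator: nvme_connect passes the DHHC-1 secrets directly on the command line, and after the connect succeeds the controller is disconnected and the host's subsystem access revoked before the next combination runs. Condensed (secrets abbreviated here; the full strings appear in the trace, and $HOSTNQN/$HOSTID stand in for the uuid values):

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -l 0 \
        -q "$HOSTNQN" --hostid "$HOSTID" \
        --dhchap-secret 'DHHC-1:01:...' --dhchap-ctrl-secret 'DHHC-1:02:...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    ./scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"
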
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.873 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.873 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.134 00:17:09.134 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.134 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.134 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.394 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.394 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.394 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.394 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.394 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.394 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.394 { 00:17:09.394 "cntlid": 5, 00:17:09.394 "qid": 0, 00:17:09.394 "state": "enabled", 00:17:09.394 "thread": "nvmf_tgt_poll_group_000", 00:17:09.394 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:09.394 "listen_address": { 00:17:09.394 "trtype": "TCP", 00:17:09.394 "adrfam": "IPv4", 00:17:09.394 "traddr": "10.0.0.2", 00:17:09.394 "trsvcid": "4420" 00:17:09.394 }, 00:17:09.394 "peer_address": { 00:17:09.394 "trtype": "TCP", 00:17:09.394 "adrfam": "IPv4", 00:17:09.394 "traddr": "10.0.0.1", 00:17:09.394 "trsvcid": "42622" 00:17:09.394 }, 00:17:09.394 "auth": { 00:17:09.394 "state": "completed", 00:17:09.394 "digest": "sha256", 00:17:09.394 "dhgroup": "null" 00:17:09.394 } 00:17:09.394 } 00:17:09.394 ]' 00:17:09.394 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.394 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:09.394 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.394 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:09.394 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.394 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.394 11:40:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.394 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.653 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: --dhchap-ctrl-secret DHHC-1:01:NWI3YjA0Yzc1ZTlmYTEyYTRlMDAzYTA3ZThiMmEyYWQbe3Q6: 00:17:09.653 11:40:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: --dhchap-ctrl-secret DHHC-1:01:NWI3YjA0Yzc1ZTlmYTEyYTRlMDAzYTA3ZThiMmEyYWQbe3Q6: 00:17:10.224 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.224 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:10.224 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.224 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.224 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.224 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.224 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:10.224 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:10.484 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:17:10.484 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.484 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:10.484 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:10.484 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:10.484 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.484 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:10.484 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.484 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:10.484 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.484 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:10.484 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:10.484 11:40:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:10.744 00:17:10.744 11:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.744 11:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.744 11:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.006 11:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.006 11:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.006 11:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.006 11:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.006 11:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.006 11:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.006 { 00:17:11.006 "cntlid": 7, 00:17:11.006 "qid": 0, 00:17:11.006 "state": "enabled", 00:17:11.006 "thread": "nvmf_tgt_poll_group_000", 00:17:11.006 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:11.006 "listen_address": { 00:17:11.006 "trtype": "TCP", 00:17:11.006 "adrfam": "IPv4", 00:17:11.006 "traddr": "10.0.0.2", 00:17:11.006 "trsvcid": "4420" 00:17:11.006 }, 00:17:11.006 "peer_address": { 00:17:11.006 "trtype": "TCP", 00:17:11.006 "adrfam": "IPv4", 00:17:11.006 "traddr": "10.0.0.1", 00:17:11.006 "trsvcid": "42646" 00:17:11.006 }, 00:17:11.006 "auth": { 00:17:11.006 "state": "completed", 00:17:11.006 "digest": "sha256", 00:17:11.006 "dhgroup": "null" 00:17:11.006 } 00:17:11.006 } 00:17:11.006 ]' 00:17:11.006 11:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.006 11:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:11.006 11:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.006 11:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:11.006 11:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.006 11:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.006 11:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.006 11:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.266 11:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjliZTBmOGI2ZDFlNWQ3OGViMWI2M2YyMzc4NjE4YWFlM2ZmYzJmMDRmMzBiNzhkOGQ3MzcyZjIwMjgzOWI4ZOA4O6g=: 00:17:11.266 11:40:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZjliZTBmOGI2ZDFlNWQ3OGViMWI2M2YyMzc4NjE4YWFlM2ZmYzJmMDRmMzBiNzhkOGQ3MzcyZjIwMjgzOWI4ZOA4O6g=: 00:17:11.838 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.838 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:11.838 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.838 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.838 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.838 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:11.839 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.839 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:11.839 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:12.099 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:17:12.099 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.099 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:12.099 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:12.099 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:12.099 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.099 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.099 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.099 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.099 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.099 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.099 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.099 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.361 00:17:12.361 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.361 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.361 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:12.361 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.361 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.361 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.361 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.361 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.361 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.361 { 00:17:12.361 "cntlid": 9, 00:17:12.361 "qid": 0, 00:17:12.361 "state": "enabled", 00:17:12.361 "thread": "nvmf_tgt_poll_group_000", 00:17:12.361 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:12.361 "listen_address": { 00:17:12.361 "trtype": "TCP", 00:17:12.361 "adrfam": "IPv4", 00:17:12.361 "traddr": "10.0.0.2", 00:17:12.361 "trsvcid": "4420" 00:17:12.361 }, 00:17:12.361 "peer_address": { 00:17:12.361 "trtype": "TCP", 00:17:12.361 "adrfam": "IPv4", 00:17:12.361 "traddr": "10.0.0.1", 00:17:12.361 "trsvcid": "42662" 00:17:12.361 }, 00:17:12.361 "auth": { 00:17:12.361 "state": "completed", 00:17:12.361 "digest": "sha256", 00:17:12.361 "dhgroup": "ffdhe2048" 00:17:12.361 } 00:17:12.361 } 00:17:12.361 ]' 00:17:12.361 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.622 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:12.622 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.622 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:17:12.622 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.622 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.622 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.622 11:40:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.884 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmU0MTRjYWU3MTVkYjU3ZjAzOGU2Y2U3YTM5ZDMzMmEzMzBlNjhjOWU5YmU2MmFhQ/wRzQ==: --dhchap-ctrl-secret DHHC-1:03:NDEwZWFiOTkwYzBlMGM4MDI3ZWU1OTdmNTkyNmM5MGQxZmQ3Y2RhNTg1ODhmNDgyOGJmMDFlZWI3N2MzYzU2YTRXPFQ=: 00:17:12.884 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmU0MTRjYWU3MTVkYjU3ZjAzOGU2Y2U3YTM5ZDMzMmEzMzBlNjhjOWU5YmU2MmFhQ/wRzQ==: --dhchap-ctrl-secret DHHC-1:03:NDEwZWFiOTkwYzBlMGM4MDI3ZWU1OTdmNTkyNmM5MGQxZmQ3Y2RhNTg1ODhmNDgyOGJmMDFlZWI3N2MzYzU2YTRXPFQ=: 00:17:13.456 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.456 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:13.456 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.456 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.456 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.456 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.456 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:13.456 11:40:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:13.717 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:17:13.717 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.717 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:13.717 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:13.717 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:13.717 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.717 11:40:39 
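
The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line that recurs before every nvmf_subsystem_add_host is the bidirectional-authentication switch: in bash, ${var:+word} expands to word only when var is set and non-empty, so a key with no paired controller key contributes no --dhchap-ctrlr-key flag at all. That is why the key3 legs of this trace call add_host with --dhchap-key key3 alone, while key0 through key2 also pass --dhchap-ctrlr-key ckeyN. Isolated sketch (the array contents are illustrative placeholders, not this run's real key names):

    ckeys=([0]=ckey0 [1]=ckey1 [2]=ckey2 [3]="")   # "" = one-way auth only
    keyid=3
    # expands to two words for keyid 0..2, to nothing for keyid 3
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "extra flags: ${ckey[*]:-<none>}"          # prints <none> for keyid=3
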
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.717 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.717 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.717 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.717 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.717 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.717 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.978 00:17:13.978 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.978 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.978 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.978 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.978 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.978 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.978 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.978 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.978 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.978 { 00:17:13.978 "cntlid": 11, 00:17:13.978 "qid": 0, 00:17:13.978 "state": "enabled", 00:17:13.978 "thread": "nvmf_tgt_poll_group_000", 00:17:13.978 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:13.978 "listen_address": { 00:17:13.978 "trtype": "TCP", 00:17:13.978 "adrfam": "IPv4", 00:17:13.978 "traddr": "10.0.0.2", 00:17:13.978 "trsvcid": "4420" 00:17:13.978 }, 00:17:13.978 "peer_address": { 00:17:13.978 "trtype": "TCP", 00:17:13.978 "adrfam": "IPv4", 00:17:13.978 "traddr": "10.0.0.1", 00:17:13.978 "trsvcid": "42690" 00:17:13.978 }, 00:17:13.978 "auth": { 00:17:13.978 "state": "completed", 00:17:13.978 "digest": "sha256", 00:17:13.978 "dhgroup": "ffdhe2048" 00:17:13.978 } 00:17:13.978 } 00:17:13.978 ]' 00:17:13.978 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.978 11:40:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:13.978 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.265 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:14.265 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.265 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.265 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.265 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.265 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWVjNDg2OGFkMTk5ZjNmMTQ2MWY2MGIyYmJkZWY3MWKp5kEY: --dhchap-ctrl-secret DHHC-1:02:MjExN2Y5YTQ0NzhiNmMyY2ZiYmVhYzE1MzZlYjg1N2M0MWRmYzUyM2ZmOWYxNTA3CmvkOw==: 00:17:14.265 11:40:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZWVjNDg2OGFkMTk5ZjNmMTQ2MWY2MGIyYmJkZWY3MWKp5kEY: --dhchap-ctrl-secret DHHC-1:02:MjExN2Y5YTQ0NzhiNmMyY2ZiYmVhYzE1MzZlYjg1N2M0MWRmYzUyM2ZmOWYxNTA3CmvkOw==: 00:17:15.299 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.299 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:15.299 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.299 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.299 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.299 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.299 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:15.299 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:15.299 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:17:15.299 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.299 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:15.299 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:15.299 11:40:40 
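
The secrets exchanged above are self-describing: an NVMe DH-HMAC-CHAP secret has the form DHHC-1:<hh>:<base64-encoded key material>:, where the two-digit field names the hash used to transform the configured secret (00 = unmodified, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512). That is why the four keys in this run carry the prefixes DHHC-1:00: through DHHC-1:03:, one per transform. Keys of this shape can be produced with nvme-cli; a sketch using this run's hostnqn (flag spelling per nvme-cli's gen-dhchap-key command, so verify against your installed version):

    # 32-byte secret, SHA-256 transform (hmac id 1) -> a DHHC-1:01:...: string
    nvme gen-dhchap-key --key-length 32 --hmac 1 \
        --nqn nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
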
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:15.299 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.299 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.299 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.299 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.299 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.299 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.299 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.299 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.572 00:17:15.572 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.572 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.572 11:40:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.572 11:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.572 11:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.572 11:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.572 11:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.572 11:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.572 11:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.572 { 00:17:15.572 "cntlid": 13, 00:17:15.572 "qid": 0, 00:17:15.572 "state": "enabled", 00:17:15.572 "thread": "nvmf_tgt_poll_group_000", 00:17:15.572 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:15.572 "listen_address": { 00:17:15.572 "trtype": "TCP", 00:17:15.572 "adrfam": "IPv4", 00:17:15.572 "traddr": "10.0.0.2", 00:17:15.572 "trsvcid": "4420" 00:17:15.572 }, 00:17:15.572 "peer_address": { 00:17:15.572 "trtype": "TCP", 00:17:15.572 "adrfam": "IPv4", 00:17:15.572 "traddr": "10.0.0.1", 00:17:15.572 "trsvcid": "42710" 00:17:15.572 }, 00:17:15.572 "auth": { 00:17:15.572 "state": "completed", 00:17:15.572 "digest": 
"sha256", 00:17:15.572 "dhgroup": "ffdhe2048" 00:17:15.572 } 00:17:15.572 } 00:17:15.572 ]' 00:17:15.572 11:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.833 11:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:15.833 11:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.833 11:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:15.833 11:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.833 11:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.833 11:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.833 11:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.094 11:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: --dhchap-ctrl-secret DHHC-1:01:NWI3YjA0Yzc1ZTlmYTEyYTRlMDAzYTA3ZThiMmEyYWQbe3Q6: 00:17:16.094 11:40:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: --dhchap-ctrl-secret DHHC-1:01:NWI3YjA0Yzc1ZTlmYTEyYTRlMDAzYTA3ZThiMmEyYWQbe3Q6: 00:17:16.665 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.665 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.666 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:16.666 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.666 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.666 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.666 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.666 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:16.666 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:16.927 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:17:16.927 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.927 11:40:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:16.927 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:16.927 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:16.927 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.927 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:16.927 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.927 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.927 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.927 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:16.927 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:16.927 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:17.189 00:17:17.189 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.189 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.189 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.450 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.450 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.450 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.450 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.450 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.450 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.450 { 00:17:17.450 "cntlid": 15, 00:17:17.450 "qid": 0, 00:17:17.450 "state": "enabled", 00:17:17.450 "thread": "nvmf_tgt_poll_group_000", 00:17:17.450 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:17.450 "listen_address": { 00:17:17.450 "trtype": "TCP", 00:17:17.450 "adrfam": "IPv4", 00:17:17.450 "traddr": "10.0.0.2", 00:17:17.450 "trsvcid": "4420" 00:17:17.450 }, 00:17:17.450 "peer_address": { 00:17:17.450 "trtype": "TCP", 00:17:17.450 "adrfam": "IPv4", 00:17:17.450 "traddr": "10.0.0.1", 00:17:17.450 
"trsvcid": "42738" 00:17:17.450 }, 00:17:17.450 "auth": { 00:17:17.450 "state": "completed", 00:17:17.450 "digest": "sha256", 00:17:17.450 "dhgroup": "ffdhe2048" 00:17:17.450 } 00:17:17.450 } 00:17:17.450 ]' 00:17:17.450 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.450 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:17.450 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:17.450 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:17.450 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.450 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.450 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.450 11:40:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.712 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjliZTBmOGI2ZDFlNWQ3OGViMWI2M2YyMzc4NjE4YWFlM2ZmYzJmMDRmMzBiNzhkOGQ3MzcyZjIwMjgzOWI4ZOA4O6g=: 00:17:17.712 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZjliZTBmOGI2ZDFlNWQ3OGViMWI2M2YyMzc4NjE4YWFlM2ZmYzJmMDRmMzBiNzhkOGQ3MzcyZjIwMjgzOWI4ZOA4O6g=: 00:17:18.284 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.544 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:18.544 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.544 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.544 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.544 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:18.544 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.544 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:18.544 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:18.544 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:17:18.544 11:40:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.544 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:18.544 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:18.544 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:18.544 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.544 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.544 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.545 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.545 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.545 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.545 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.545 11:40:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.806 00:17:18.806 11:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.806 11:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.806 11:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.067 11:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.067 11:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.067 11:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.067 11:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.067 11:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.067 11:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.067 { 00:17:19.067 "cntlid": 17, 00:17:19.067 "qid": 0, 00:17:19.067 "state": "enabled", 00:17:19.067 "thread": "nvmf_tgt_poll_group_000", 00:17:19.067 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:19.067 "listen_address": { 00:17:19.067 "trtype": "TCP", 00:17:19.067 "adrfam": "IPv4", 
00:17:19.067 "traddr": "10.0.0.2", 00:17:19.067 "trsvcid": "4420" 00:17:19.067 }, 00:17:19.067 "peer_address": { 00:17:19.067 "trtype": "TCP", 00:17:19.067 "adrfam": "IPv4", 00:17:19.067 "traddr": "10.0.0.1", 00:17:19.067 "trsvcid": "49172" 00:17:19.067 }, 00:17:19.067 "auth": { 00:17:19.067 "state": "completed", 00:17:19.067 "digest": "sha256", 00:17:19.067 "dhgroup": "ffdhe3072" 00:17:19.067 } 00:17:19.067 } 00:17:19.067 ]' 00:17:19.067 11:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.067 11:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:19.067 11:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.067 11:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:19.067 11:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.328 11:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.328 11:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.328 11:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.328 11:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmU0MTRjYWU3MTVkYjU3ZjAzOGU2Y2U3YTM5ZDMzMmEzMzBlNjhjOWU5YmU2MmFhQ/wRzQ==: --dhchap-ctrl-secret DHHC-1:03:NDEwZWFiOTkwYzBlMGM4MDI3ZWU1OTdmNTkyNmM5MGQxZmQ3Y2RhNTg1ODhmNDgyOGJmMDFlZWI3N2MzYzU2YTRXPFQ=: 00:17:19.328 11:40:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmU0MTRjYWU3MTVkYjU3ZjAzOGU2Y2U3YTM5ZDMzMmEzMzBlNjhjOWU5YmU2MmFhQ/wRzQ==: --dhchap-ctrl-secret DHHC-1:03:NDEwZWFiOTkwYzBlMGM4MDI3ZWU1OTdmNTkyNmM5MGQxZmQ3Y2RhNTg1ODhmNDgyOGJmMDFlZWI3N2MzYzU2YTRXPFQ=: 00:17:20.271 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.271 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:20.271 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.271 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.271 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.271 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.271 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:20.271 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:20.271 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:17:20.271 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.271 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:20.271 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:20.271 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:20.271 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.271 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.271 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.271 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.271 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.271 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.271 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.271 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.532 00:17:20.532 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.532 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.532 11:40:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.793 11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.793 11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.793 11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.793 11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.793 11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.793 11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.793 { 
00:17:20.793 "cntlid": 19, 00:17:20.793 "qid": 0, 00:17:20.793 "state": "enabled", 00:17:20.793 "thread": "nvmf_tgt_poll_group_000", 00:17:20.793 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:20.793 "listen_address": { 00:17:20.793 "trtype": "TCP", 00:17:20.793 "adrfam": "IPv4", 00:17:20.793 "traddr": "10.0.0.2", 00:17:20.793 "trsvcid": "4420" 00:17:20.793 }, 00:17:20.793 "peer_address": { 00:17:20.793 "trtype": "TCP", 00:17:20.793 "adrfam": "IPv4", 00:17:20.793 "traddr": "10.0.0.1", 00:17:20.793 "trsvcid": "49198" 00:17:20.793 }, 00:17:20.793 "auth": { 00:17:20.793 "state": "completed", 00:17:20.793 "digest": "sha256", 00:17:20.793 "dhgroup": "ffdhe3072" 00:17:20.793 } 00:17:20.793 } 00:17:20.793 ]' 00:17:20.793 11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.793 11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:20.793 11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.793 11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:20.793 11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.793 11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.793 11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.793 11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.055 11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWVjNDg2OGFkMTk5ZjNmMTQ2MWY2MGIyYmJkZWY3MWKp5kEY: --dhchap-ctrl-secret DHHC-1:02:MjExN2Y5YTQ0NzhiNmMyY2ZiYmVhYzE1MzZlYjg1N2M0MWRmYzUyM2ZmOWYxNTA3CmvkOw==: 00:17:21.055 11:40:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZWVjNDg2OGFkMTk5ZjNmMTQ2MWY2MGIyYmJkZWY3MWKp5kEY: --dhchap-ctrl-secret DHHC-1:02:MjExN2Y5YTQ0NzhiNmMyY2ZiYmVhYzE1MzZlYjg1N2M0MWRmYzUyM2ZmOWYxNTA3CmvkOw==: 00:17:21.629 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.629 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.629 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:21.629 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.629 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.629 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.629 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.629 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:21.629 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:21.889 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:17:21.889 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.889 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:21.889 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:21.889 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:21.889 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.889 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.889 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.889 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.889 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.889 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.889 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.889 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:22.150 00:17:22.150 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:22.150 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.150 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.411 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.411 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.411 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.411 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.411 11:40:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.411 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.411 { 00:17:22.411 "cntlid": 21, 00:17:22.411 "qid": 0, 00:17:22.411 "state": "enabled", 00:17:22.411 "thread": "nvmf_tgt_poll_group_000", 00:17:22.411 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:22.411 "listen_address": { 00:17:22.411 "trtype": "TCP", 00:17:22.411 "adrfam": "IPv4", 00:17:22.411 "traddr": "10.0.0.2", 00:17:22.411 "trsvcid": "4420" 00:17:22.411 }, 00:17:22.411 "peer_address": { 00:17:22.411 "trtype": "TCP", 00:17:22.411 "adrfam": "IPv4", 00:17:22.411 "traddr": "10.0.0.1", 00:17:22.411 "trsvcid": "49224" 00:17:22.411 }, 00:17:22.411 "auth": { 00:17:22.411 "state": "completed", 00:17:22.411 "digest": "sha256", 00:17:22.411 "dhgroup": "ffdhe3072" 00:17:22.411 } 00:17:22.411 } 00:17:22.411 ]' 00:17:22.411 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.411 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:22.411 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.411 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:22.411 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.411 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.411 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.411 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.672 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: --dhchap-ctrl-secret DHHC-1:01:NWI3YjA0Yzc1ZTlmYTEyYTRlMDAzYTA3ZThiMmEyYWQbe3Q6: 00:17:22.672 11:40:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: --dhchap-ctrl-secret DHHC-1:01:NWI3YjA0Yzc1ZTlmYTEyYTRlMDAzYTA3ZThiMmEyYWQbe3Q6: 00:17:23.244 11:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.244 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.244 11:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:23.244 11:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.244 11:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.244 11:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:17:23.244 11:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.244 11:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:23.244 11:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:17:23.504 11:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:17:23.504 11:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.504 11:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:23.504 11:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:23.504 11:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:23.504 11:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.504 11:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:23.504 11:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.504 11:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.504 11:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.504 11:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:23.504 11:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:23.504 11:40:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:23.765 00:17:23.765 11:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.765 11:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.765 11:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.026 11:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.026 11:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.026 11:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.026 11:40:49 
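
One xtrace quirk worth decoding: assertions like [[ nvme0 == \n\v\m\e\0 ]] are not corruption. Inside bash's [[ ]], an unquoted right-hand side of == is treated as a glob pattern, so the script quotes it to force a literal comparison, and set -x renders that quoted literal by backslash-escaping every character. The check underneath is simply:

    # did the attach register the controller under the expected bdev name?
    name=$(hostrpc bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]    # quoted RHS: literal match, no globbing
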
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.026 11:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.026 11:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.026 { 00:17:24.026 "cntlid": 23, 00:17:24.026 "qid": 0, 00:17:24.026 "state": "enabled", 00:17:24.026 "thread": "nvmf_tgt_poll_group_000", 00:17:24.026 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:24.026 "listen_address": { 00:17:24.026 "trtype": "TCP", 00:17:24.026 "adrfam": "IPv4", 00:17:24.026 "traddr": "10.0.0.2", 00:17:24.026 "trsvcid": "4420" 00:17:24.026 }, 00:17:24.026 "peer_address": { 00:17:24.026 "trtype": "TCP", 00:17:24.026 "adrfam": "IPv4", 00:17:24.026 "traddr": "10.0.0.1", 00:17:24.026 "trsvcid": "49262" 00:17:24.026 }, 00:17:24.026 "auth": { 00:17:24.026 "state": "completed", 00:17:24.026 "digest": "sha256", 00:17:24.026 "dhgroup": "ffdhe3072" 00:17:24.026 } 00:17:24.026 } 00:17:24.026 ]' 00:17:24.026 11:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.026 11:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:24.026 11:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.026 11:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:24.026 11:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.026 11:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.026 11:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.026 11:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.286 11:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjliZTBmOGI2ZDFlNWQ3OGViMWI2M2YyMzc4NjE4YWFlM2ZmYzJmMDRmMzBiNzhkOGQ3MzcyZjIwMjgzOWI4ZOA4O6g=: 00:17:24.286 11:40:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZjliZTBmOGI2ZDFlNWQ3OGViMWI2M2YyMzc4NjE4YWFlM2ZmYzJmMDRmMzBiNzhkOGQ3MzcyZjIwMjgzOWI4ZOA4O6g=: 00:17:24.856 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.856 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.856 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:24.856 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.856 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.856 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:17:24.856 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:24.856 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.856 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:24.856 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:25.117 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:17:25.117 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:25.117 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:25.117 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:25.117 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:25.117 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.117 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.117 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.117 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.117 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.117 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.117 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.117 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.378 00:17:25.378 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.378 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.378 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.639 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.639 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.639 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.639 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.639 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.639 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.639 { 00:17:25.639 "cntlid": 25, 00:17:25.639 "qid": 0, 00:17:25.639 "state": "enabled", 00:17:25.639 "thread": "nvmf_tgt_poll_group_000", 00:17:25.639 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:25.639 "listen_address": { 00:17:25.639 "trtype": "TCP", 00:17:25.639 "adrfam": "IPv4", 00:17:25.639 "traddr": "10.0.0.2", 00:17:25.639 "trsvcid": "4420" 00:17:25.639 }, 00:17:25.639 "peer_address": { 00:17:25.639 "trtype": "TCP", 00:17:25.639 "adrfam": "IPv4", 00:17:25.639 "traddr": "10.0.0.1", 00:17:25.639 "trsvcid": "49282" 00:17:25.639 }, 00:17:25.639 "auth": { 00:17:25.639 "state": "completed", 00:17:25.639 "digest": "sha256", 00:17:25.639 "dhgroup": "ffdhe4096" 00:17:25.639 } 00:17:25.639 } 00:17:25.639 ]' 00:17:25.639 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.639 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:25.639 11:40:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.639 11:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:25.639 11:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.639 11:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.639 11:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.639 11:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.901 11:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmU0MTRjYWU3MTVkYjU3ZjAzOGU2Y2U3YTM5ZDMzMmEzMzBlNjhjOWU5YmU2MmFhQ/wRzQ==: --dhchap-ctrl-secret DHHC-1:03:NDEwZWFiOTkwYzBlMGM4MDI3ZWU1OTdmNTkyNmM5MGQxZmQ3Y2RhNTg1ODhmNDgyOGJmMDFlZWI3N2MzYzU2YTRXPFQ=: 00:17:25.901 11:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmU0MTRjYWU3MTVkYjU3ZjAzOGU2Y2U3YTM5ZDMzMmEzMzBlNjhjOWU5YmU2MmFhQ/wRzQ==: --dhchap-ctrl-secret DHHC-1:03:NDEwZWFiOTkwYzBlMGM4MDI3ZWU1OTdmNTkyNmM5MGQxZmQ3Y2RhNTg1ODhmNDgyOGJmMDFlZWI3N2MzYzU2YTRXPFQ=: 00:17:26.472 11:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.472 11:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:26.472 11:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.472 11:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.472 11:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.472 11:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.472 11:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:26.472 11:40:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:26.732 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:17:26.732 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.732 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:26.732 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:26.732 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:26.732 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.732 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.732 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.732 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.732 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.732 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.732 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.733 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.994 00:17:26.994 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.994 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.994 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.255 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.255 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.255 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.255 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.255 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.255 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.255 { 00:17:27.255 "cntlid": 27, 00:17:27.255 "qid": 0, 00:17:27.255 "state": "enabled", 00:17:27.255 "thread": "nvmf_tgt_poll_group_000", 00:17:27.255 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:27.255 "listen_address": { 00:17:27.255 "trtype": "TCP", 00:17:27.255 "adrfam": "IPv4", 00:17:27.255 "traddr": "10.0.0.2", 00:17:27.255 "trsvcid": "4420" 00:17:27.255 }, 00:17:27.255 "peer_address": { 00:17:27.255 "trtype": "TCP", 00:17:27.255 "adrfam": "IPv4", 00:17:27.255 "traddr": "10.0.0.1", 00:17:27.255 "trsvcid": "49308" 00:17:27.255 }, 00:17:27.255 "auth": { 00:17:27.255 "state": "completed", 00:17:27.255 "digest": "sha256", 00:17:27.255 "dhgroup": "ffdhe4096" 00:17:27.255 } 00:17:27.255 } 00:17:27.255 ]' 00:17:27.255 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.255 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:27.255 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.255 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:27.255 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.255 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.255 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.255 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.516 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWVjNDg2OGFkMTk5ZjNmMTQ2MWY2MGIyYmJkZWY3MWKp5kEY: --dhchap-ctrl-secret DHHC-1:02:MjExN2Y5YTQ0NzhiNmMyY2ZiYmVhYzE1MzZlYjg1N2M0MWRmYzUyM2ZmOWYxNTA3CmvkOw==: 00:17:27.516 11:40:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZWVjNDg2OGFkMTk5ZjNmMTQ2MWY2MGIyYmJkZWY3MWKp5kEY: --dhchap-ctrl-secret DHHC-1:02:MjExN2Y5YTQ0NzhiNmMyY2ZiYmVhYzE1MzZlYjg1N2M0MWRmYzUyM2ZmOWYxNTA3CmvkOw==: 00:17:28.086 11:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:28.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.346 11:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:28.346 11:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.346 11:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.346 11:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.346 11:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.346 11:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:28.347 11:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:28.347 11:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:17:28.347 11:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.347 11:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:28.347 11:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:28.347 11:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:28.347 11:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.347 11:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.347 11:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.347 11:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.347 11:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.347 11:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.347 11:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.347 11:40:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.607 00:17:28.607 11:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
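Every iteration above is one pass of the connect_authenticate helper in target/auth.sh: pin the host's DH-HMAC-CHAP negotiation to a single digest/dhgroup pair, allow the host NQN on the subsystem with the key under test, attach a controller through the host application's RPC socket, and read the negotiated parameters back out of the target's qpair listing. A condensed sketch of the pass in flight here (sha256/ffdhe4096 with key2), assuming the target-side rpc_cmd wrapper talks to the default SPDK RPC socket; the host-side calls go through /var/tmp/host.sock exactly as logged, and the NQNs, addresses and key names are the ones from this run:

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    host_rpc="$spdk/scripts/rpc.py -s /var/tmp/host.sock"  # host (initiator) app
    tgt_rpc="$spdk/scripts/rpc.py"                         # target app, default socket (assumed)
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

    # 1. Restrict the host to one digest/dhgroup so the negotiation is deterministic.
    $host_rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

    # 2. Allow the host on the subsystem; a controller key is passed only for keyids
    #    that have a ckey configured (bidirectional authentication).
    $tgt_rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # 3. Attach a controller over TCP, authenticating in-band with the same keys.
    $host_rpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # 4. Confirm the controller exists, then check what the qpair actually negotiated.
    [[ $($host_rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    qpairs=$($tgt_rpc nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # 5. Detach before the next keyid/dhgroup combination.
    $host_rpc bdev_nvme_detach_controller nvme0

The qpair JSON blocks interleaved in the log are the raw output of step 4; the jq probes on .auth.digest, .auth.dhgroup and .auth.state are what turn them into pass/fail assertions.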
00:17:28.607 11:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.607 11:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.868 11:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.868 11:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.868 11:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.868 11:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.868 11:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.868 11:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.868 { 00:17:28.868 "cntlid": 29, 00:17:28.868 "qid": 0, 00:17:28.868 "state": "enabled", 00:17:28.868 "thread": "nvmf_tgt_poll_group_000", 00:17:28.868 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:28.868 "listen_address": { 00:17:28.868 "trtype": "TCP", 00:17:28.868 "adrfam": "IPv4", 00:17:28.868 "traddr": "10.0.0.2", 00:17:28.868 "trsvcid": "4420" 00:17:28.868 }, 00:17:28.868 "peer_address": { 00:17:28.868 "trtype": "TCP", 00:17:28.868 "adrfam": "IPv4", 00:17:28.868 "traddr": "10.0.0.1", 00:17:28.868 "trsvcid": "56614" 00:17:28.868 }, 00:17:28.868 "auth": { 00:17:28.868 "state": "completed", 00:17:28.868 "digest": "sha256", 00:17:28.868 "dhgroup": "ffdhe4096" 00:17:28.868 } 00:17:28.868 } 00:17:28.868 ]' 00:17:28.868 11:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.868 11:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:28.868 11:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.868 11:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:28.868 11:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:29.129 11:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.129 11:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.129 11:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.129 11:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: --dhchap-ctrl-secret DHHC-1:01:NWI3YjA0Yzc1ZTlmYTEyYTRlMDAzYTA3ZThiMmEyYWQbe3Q6: 00:17:29.129 11:40:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: 
--dhchap-ctrl-secret DHHC-1:01:NWI3YjA0Yzc1ZTlmYTEyYTRlMDAzYTA3ZThiMmEyYWQbe3Q6: 00:17:30.075 11:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.075 11:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:30.075 11:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.075 11:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.075 11:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.075 11:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.075 11:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:30.075 11:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:17:30.075 11:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:17:30.075 11:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.075 11:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:30.075 11:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:30.075 11:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:30.075 11:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.075 11:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:30.075 11:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.075 11:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.075 11:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.075 11:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:30.075 11:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:30.075 11:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:30.336 00:17:30.336 11:40:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.336 11:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.336 11:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.597 11:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.597 11:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.597 11:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.597 11:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.597 11:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.597 11:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.597 { 00:17:30.597 "cntlid": 31, 00:17:30.597 "qid": 0, 00:17:30.597 "state": "enabled", 00:17:30.597 "thread": "nvmf_tgt_poll_group_000", 00:17:30.597 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:30.597 "listen_address": { 00:17:30.597 "trtype": "TCP", 00:17:30.597 "adrfam": "IPv4", 00:17:30.597 "traddr": "10.0.0.2", 00:17:30.597 "trsvcid": "4420" 00:17:30.597 }, 00:17:30.597 "peer_address": { 00:17:30.597 "trtype": "TCP", 00:17:30.597 "adrfam": "IPv4", 00:17:30.597 "traddr": "10.0.0.1", 00:17:30.597 "trsvcid": "56640" 00:17:30.597 }, 00:17:30.597 "auth": { 00:17:30.597 "state": "completed", 00:17:30.597 "digest": "sha256", 00:17:30.597 "dhgroup": "ffdhe4096" 00:17:30.597 } 00:17:30.597 } 00:17:30.597 ]' 00:17:30.597 11:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.597 11:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:30.597 11:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.597 11:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:30.597 11:40:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.597 11:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.597 11:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.597 11:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.857 11:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjliZTBmOGI2ZDFlNWQ3OGViMWI2M2YyMzc4NjE4YWFlM2ZmYzJmMDRmMzBiNzhkOGQ3MzcyZjIwMjgzOWI4ZOA4O6g=: 00:17:30.857 11:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret 
DHHC-1:03:ZjliZTBmOGI2ZDFlNWQ3OGViMWI2M2YyMzc4NjE4YWFlM2ZmYzJmMDRmMzBiNzhkOGQ3MzcyZjIwMjgzOWI4ZOA4O6g=: 00:17:31.429 11:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.429 11:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:31.429 11:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.429 11:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.429 11:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.429 11:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:31.429 11:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.429 11:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:31.429 11:40:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:31.690 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:17:31.690 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.690 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:31.690 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:31.690 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:31.690 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.690 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.690 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.690 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.690 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.690 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.690 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.690 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.951 00:17:31.951 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.951 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.951 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.212 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.212 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.212 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.212 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.212 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.212 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.212 { 00:17:32.212 "cntlid": 33, 00:17:32.212 "qid": 0, 00:17:32.212 "state": "enabled", 00:17:32.212 "thread": "nvmf_tgt_poll_group_000", 00:17:32.212 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:32.212 "listen_address": { 00:17:32.212 "trtype": "TCP", 00:17:32.212 "adrfam": "IPv4", 00:17:32.212 "traddr": "10.0.0.2", 00:17:32.212 "trsvcid": "4420" 00:17:32.212 }, 00:17:32.212 "peer_address": { 00:17:32.212 "trtype": "TCP", 00:17:32.212 "adrfam": "IPv4", 00:17:32.212 "traddr": "10.0.0.1", 00:17:32.212 "trsvcid": "56672" 00:17:32.212 }, 00:17:32.212 "auth": { 00:17:32.212 "state": "completed", 00:17:32.212 "digest": "sha256", 00:17:32.212 "dhgroup": "ffdhe6144" 00:17:32.212 } 00:17:32.212 } 00:17:32.212 ]' 00:17:32.212 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.212 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:32.212 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.212 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:32.212 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.212 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.212 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.212 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.473 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmU0MTRjYWU3MTVkYjU3ZjAzOGU2Y2U3YTM5ZDMzMmEzMzBlNjhjOWU5YmU2MmFhQ/wRzQ==: --dhchap-ctrl-secret 
DHHC-1:03:NDEwZWFiOTkwYzBlMGM4MDI3ZWU1OTdmNTkyNmM5MGQxZmQ3Y2RhNTg1ODhmNDgyOGJmMDFlZWI3N2MzYzU2YTRXPFQ=: 00:17:32.473 11:40:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmU0MTRjYWU3MTVkYjU3ZjAzOGU2Y2U3YTM5ZDMzMmEzMzBlNjhjOWU5YmU2MmFhQ/wRzQ==: --dhchap-ctrl-secret DHHC-1:03:NDEwZWFiOTkwYzBlMGM4MDI3ZWU1OTdmNTkyNmM5MGQxZmQ3Y2RhNTg1ODhmNDgyOGJmMDFlZWI3N2MzYzU2YTRXPFQ=: 00:17:33.044 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.307 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:33.307 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.307 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.307 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.307 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.307 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:33.307 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:33.307 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:17:33.307 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.307 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:33.307 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:33.307 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:33.307 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.307 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.307 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.307 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.307 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.307 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.307 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.307 11:40:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.568 00:17:33.568 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.830 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.830 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.830 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.830 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.830 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.830 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.830 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.830 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.830 { 00:17:33.830 "cntlid": 35, 00:17:33.830 "qid": 0, 00:17:33.830 "state": "enabled", 00:17:33.830 "thread": "nvmf_tgt_poll_group_000", 00:17:33.830 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:33.830 "listen_address": { 00:17:33.830 "trtype": "TCP", 00:17:33.830 "adrfam": "IPv4", 00:17:33.830 "traddr": "10.0.0.2", 00:17:33.830 "trsvcid": "4420" 00:17:33.830 }, 00:17:33.830 "peer_address": { 00:17:33.830 "trtype": "TCP", 00:17:33.830 "adrfam": "IPv4", 00:17:33.830 "traddr": "10.0.0.1", 00:17:33.830 "trsvcid": "56706" 00:17:33.830 }, 00:17:33.830 "auth": { 00:17:33.830 "state": "completed", 00:17:33.830 "digest": "sha256", 00:17:33.830 "dhgroup": "ffdhe6144" 00:17:33.830 } 00:17:33.830 } 00:17:33.830 ]' 00:17:33.830 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.830 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:33.830 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.830 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:34.091 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.091 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.091 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.091 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.092 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWVjNDg2OGFkMTk5ZjNmMTQ2MWY2MGIyYmJkZWY3MWKp5kEY: --dhchap-ctrl-secret DHHC-1:02:MjExN2Y5YTQ0NzhiNmMyY2ZiYmVhYzE1MzZlYjg1N2M0MWRmYzUyM2ZmOWYxNTA3CmvkOw==: 00:17:34.092 11:40:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZWVjNDg2OGFkMTk5ZjNmMTQ2MWY2MGIyYmJkZWY3MWKp5kEY: --dhchap-ctrl-secret DHHC-1:02:MjExN2Y5YTQ0NzhiNmMyY2ZiYmVhYzE1MzZlYjg1N2M0MWRmYzUyM2ZmOWYxNTA3CmvkOw==: 00:17:35.034 11:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.034 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.034 11:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:35.034 11:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.034 11:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.034 11:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.034 11:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.034 11:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:35.034 11:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:35.034 11:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:17:35.034 11:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.034 11:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:35.034 11:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:35.034 11:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:35.034 11:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.034 11:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.034 11:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.034 11:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.034 11:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.034 11:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.034 11:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.034 11:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.295 00:17:35.296 11:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.296 11:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.296 11:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.556 11:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.556 11:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.556 11:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.556 11:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.556 11:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.556 11:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.556 { 00:17:35.556 "cntlid": 37, 00:17:35.556 "qid": 0, 00:17:35.556 "state": "enabled", 00:17:35.556 "thread": "nvmf_tgt_poll_group_000", 00:17:35.556 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:35.556 "listen_address": { 00:17:35.556 "trtype": "TCP", 00:17:35.556 "adrfam": "IPv4", 00:17:35.556 "traddr": "10.0.0.2", 00:17:35.556 "trsvcid": "4420" 00:17:35.556 }, 00:17:35.556 "peer_address": { 00:17:35.556 "trtype": "TCP", 00:17:35.556 "adrfam": "IPv4", 00:17:35.556 "traddr": "10.0.0.1", 00:17:35.556 "trsvcid": "56732" 00:17:35.556 }, 00:17:35.556 "auth": { 00:17:35.556 "state": "completed", 00:17:35.556 "digest": "sha256", 00:17:35.556 "dhgroup": "ffdhe6144" 00:17:35.556 } 00:17:35.556 } 00:17:35.556 ]' 00:17:35.556 11:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.556 11:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:35.556 11:41:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.556 11:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:35.556 11:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:35.817 11:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.817 11:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:35.817 11:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.817 11:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: --dhchap-ctrl-secret DHHC-1:01:NWI3YjA0Yzc1ZTlmYTEyYTRlMDAzYTA3ZThiMmEyYWQbe3Q6: 00:17:35.817 11:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: --dhchap-ctrl-secret DHHC-1:01:NWI3YjA0Yzc1ZTlmYTEyYTRlMDAzYTA3ZThiMmEyYWQbe3Q6: 00:17:36.760 11:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:36.760 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:36.760 11:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:36.760 11:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.760 11:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.760 11:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.760 11:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:36.760 11:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:36.760 11:41:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:36.760 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:17:36.760 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:36.760 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:36.760 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:36.760 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:36.760 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:36.760 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:36.760 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.760 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.760 11:41:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.760 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:36.760 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:36.760 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:37.020 00:17:37.020 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.020 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.020 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.281 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.281 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.281 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.281 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.281 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.281 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.281 { 00:17:37.281 "cntlid": 39, 00:17:37.281 "qid": 0, 00:17:37.281 "state": "enabled", 00:17:37.281 "thread": "nvmf_tgt_poll_group_000", 00:17:37.281 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:37.281 "listen_address": { 00:17:37.281 "trtype": "TCP", 00:17:37.281 "adrfam": "IPv4", 00:17:37.281 "traddr": "10.0.0.2", 00:17:37.281 "trsvcid": "4420" 00:17:37.281 }, 00:17:37.281 "peer_address": { 00:17:37.281 "trtype": "TCP", 00:17:37.281 "adrfam": "IPv4", 00:17:37.281 "traddr": "10.0.0.1", 00:17:37.281 "trsvcid": "56756" 00:17:37.281 }, 00:17:37.281 "auth": { 00:17:37.281 "state": "completed", 00:17:37.281 "digest": "sha256", 00:17:37.281 "dhgroup": "ffdhe6144" 00:17:37.281 } 00:17:37.281 } 00:17:37.281 ]' 00:17:37.281 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.281 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:37.281 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.543 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:37.543 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.543 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:17:37.543 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.543 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.543 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjliZTBmOGI2ZDFlNWQ3OGViMWI2M2YyMzc4NjE4YWFlM2ZmYzJmMDRmMzBiNzhkOGQ3MzcyZjIwMjgzOWI4ZOA4O6g=: 00:17:37.543 11:41:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZjliZTBmOGI2ZDFlNWQ3OGViMWI2M2YyMzc4NjE4YWFlM2ZmYzJmMDRmMzBiNzhkOGQ3MzcyZjIwMjgzOWI4ZOA4O6g=: 00:17:38.485 11:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.485 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.485 11:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:38.485 11:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.485 11:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.485 11:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.485 11:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:38.485 11:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.485 11:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:38.485 11:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:38.485 11:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:17:38.485 11:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.485 11:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:38.485 11:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:38.485 11:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:38.485 11:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.486 11:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.486 11:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
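By this point the outer dhgroup loop has swept ffdhe3072, ffdhe4096 and ffdhe6144 with keyids 0-3 each, and is starting ffdhe8192. Each pass also exercises the kernel initiator path with nvme-cli. In the secret strings, the DHHC-1:<t>: prefix identifies the hash used to transform the configured key before base64 encoding (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512, as documented for nvme-cli's gen-dhchap-key); this is an interpretation of the format, not something the log states. A sketch of the two connect shapes seen in this run, with the secrets abbreviated here for readability (key0-key2 pair a host secret with a controller secret; the key3 passes are host-authentication only):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    tgt_rpc="$spdk/scripts/rpc.py"   # target app, default RPC socket (assumed)
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be
    hostnqn="nqn.2014-08.org.nvmexpress:uuid:$hostid"

    # Bidirectional: the host proves itself with --dhchap-secret and additionally
    # verifies the controller via --dhchap-ctrl-secret.
    # (-i 1 = one I/O queue, -l 0 = controller loss timeout of 0 seconds)
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid "$hostid" -l 0 \
        --dhchap-secret 'DHHC-1:00:NmU0...Q/wRzQ==:' \
        --dhchap-ctrl-secret 'DHHC-1:03:NDEw...YTRXPFQ=:'

    # Unidirectional (the key3 passes): host secret only.
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid "$hostid" -l 0 \
        --dhchap-secret 'DHHC-1:03:Zjli...OWI4ZOA4O6g=:'

    # Tear-down between passes, as echoed in the log:
    nvme disconnect -n "$subnqn"          # prints "... disconnected 1 controller(s)"
    $tgt_rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"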
00:17:38.486 11:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.486 11:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.486 11:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.486 11:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.486 11:41:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.057 00:17:39.057 11:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.057 11:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.057 11:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.057 11:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.057 11:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.057 11:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.057 11:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.057 11:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.057 11:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.057 { 00:17:39.057 "cntlid": 41, 00:17:39.057 "qid": 0, 00:17:39.057 "state": "enabled", 00:17:39.057 "thread": "nvmf_tgt_poll_group_000", 00:17:39.057 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:39.057 "listen_address": { 00:17:39.057 "trtype": "TCP", 00:17:39.057 "adrfam": "IPv4", 00:17:39.057 "traddr": "10.0.0.2", 00:17:39.057 "trsvcid": "4420" 00:17:39.057 }, 00:17:39.057 "peer_address": { 00:17:39.057 "trtype": "TCP", 00:17:39.057 "adrfam": "IPv4", 00:17:39.057 "traddr": "10.0.0.1", 00:17:39.057 "trsvcid": "44960" 00:17:39.057 }, 00:17:39.057 "auth": { 00:17:39.057 "state": "completed", 00:17:39.057 "digest": "sha256", 00:17:39.057 "dhgroup": "ffdhe8192" 00:17:39.057 } 00:17:39.057 } 00:17:39.057 ]' 00:17:39.057 11:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.318 11:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:39.318 11:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.318 11:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:39.318 11:41:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.318 11:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.318 11:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.318 11:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.577 11:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmU0MTRjYWU3MTVkYjU3ZjAzOGU2Y2U3YTM5ZDMzMmEzMzBlNjhjOWU5YmU2MmFhQ/wRzQ==: --dhchap-ctrl-secret DHHC-1:03:NDEwZWFiOTkwYzBlMGM4MDI3ZWU1OTdmNTkyNmM5MGQxZmQ3Y2RhNTg1ODhmNDgyOGJmMDFlZWI3N2MzYzU2YTRXPFQ=: 00:17:39.577 11:41:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmU0MTRjYWU3MTVkYjU3ZjAzOGU2Y2U3YTM5ZDMzMmEzMzBlNjhjOWU5YmU2MmFhQ/wRzQ==: --dhchap-ctrl-secret DHHC-1:03:NDEwZWFiOTkwYzBlMGM4MDI3ZWU1OTdmNTkyNmM5MGQxZmQ3Y2RhNTg1ODhmNDgyOGJmMDFlZWI3N2MzYzU2YTRXPFQ=: 00:17:40.148 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.148 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.148 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:40.148 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.148 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.148 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.148 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.148 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:40.148 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:40.408 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:17:40.408 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.408 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:40.408 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:40.408 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:40.408 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.408 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.408 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.408 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.408 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.408 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.408 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.408 11:41:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.979 00:17:40.979 11:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.979 11:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.979 11:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.979 11:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.979 11:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.979 11:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.979 11:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.979 11:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.979 11:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:40.979 { 00:17:40.979 "cntlid": 43, 00:17:40.979 "qid": 0, 00:17:40.979 "state": "enabled", 00:17:40.979 "thread": "nvmf_tgt_poll_group_000", 00:17:40.979 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:40.979 "listen_address": { 00:17:40.979 "trtype": "TCP", 00:17:40.979 "adrfam": "IPv4", 00:17:40.979 "traddr": "10.0.0.2", 00:17:40.979 "trsvcid": "4420" 00:17:40.979 }, 00:17:40.979 "peer_address": { 00:17:40.979 "trtype": "TCP", 00:17:40.979 "adrfam": "IPv4", 00:17:40.979 "traddr": "10.0.0.1", 00:17:40.979 "trsvcid": "44986" 00:17:40.979 }, 00:17:40.979 "auth": { 00:17:40.979 "state": "completed", 00:17:40.979 "digest": "sha256", 00:17:40.979 "dhgroup": "ffdhe8192" 00:17:40.979 } 00:17:40.979 } 00:17:40.979 ]' 00:17:40.979 11:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:40.979 11:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:17:40.979 11:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.239 11:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:41.239 11:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.239 11:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.239 11:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.240 11:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.240 11:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWVjNDg2OGFkMTk5ZjNmMTQ2MWY2MGIyYmJkZWY3MWKp5kEY: --dhchap-ctrl-secret DHHC-1:02:MjExN2Y5YTQ0NzhiNmMyY2ZiYmVhYzE1MzZlYjg1N2M0MWRmYzUyM2ZmOWYxNTA3CmvkOw==: 00:17:41.240 11:41:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZWVjNDg2OGFkMTk5ZjNmMTQ2MWY2MGIyYmJkZWY3MWKp5kEY: --dhchap-ctrl-secret DHHC-1:02:MjExN2Y5YTQ0NzhiNmMyY2ZiYmVhYzE1MzZlYjg1N2M0MWRmYzUyM2ZmOWYxNTA3CmvkOw==: 00:17:42.183 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.183 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:42.183 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.183 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.183 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.183 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.183 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:42.183 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:42.183 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:17:42.183 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.183 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:42.183 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:42.183 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:42.183 11:41:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.183 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.183 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.183 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.183 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.183 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.183 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.183 11:41:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.755 00:17:42.755 11:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.755 11:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.755 11:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.015 11:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.015 11:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.015 11:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.015 11:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.015 11:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.015 11:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.015 { 00:17:43.015 "cntlid": 45, 00:17:43.015 "qid": 0, 00:17:43.015 "state": "enabled", 00:17:43.015 "thread": "nvmf_tgt_poll_group_000", 00:17:43.015 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:43.015 "listen_address": { 00:17:43.015 "trtype": "TCP", 00:17:43.015 "adrfam": "IPv4", 00:17:43.015 "traddr": "10.0.0.2", 00:17:43.015 "trsvcid": "4420" 00:17:43.015 }, 00:17:43.015 "peer_address": { 00:17:43.015 "trtype": "TCP", 00:17:43.015 "adrfam": "IPv4", 00:17:43.015 "traddr": "10.0.0.1", 00:17:43.015 "trsvcid": "45014" 00:17:43.015 }, 00:17:43.015 "auth": { 00:17:43.015 "state": "completed", 00:17:43.015 "digest": "sha256", 00:17:43.015 "dhgroup": "ffdhe8192" 00:17:43.016 } 00:17:43.016 } 00:17:43.016 ]' 00:17:43.016 
11:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.016 11:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:43.016 11:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.016 11:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:43.016 11:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.016 11:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.016 11:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.016 11:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.277 11:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: --dhchap-ctrl-secret DHHC-1:01:NWI3YjA0Yzc1ZTlmYTEyYTRlMDAzYTA3ZThiMmEyYWQbe3Q6: 00:17:43.277 11:41:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: --dhchap-ctrl-secret DHHC-1:01:NWI3YjA0Yzc1ZTlmYTEyYTRlMDAzYTA3ZThiMmEyYWQbe3Q6: 00:17:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:43.849 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:44.110 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:17:44.110 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.110 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:44.110 11:41:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:44.110 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:44.110 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.110 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:44.110 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.110 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.110 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.110 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:44.110 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:44.110 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:44.681 00:17:44.681 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.681 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.681 11:41:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.681 11:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.681 11:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.681 11:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.681 11:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.681 11:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.681 11:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.681 { 00:17:44.681 "cntlid": 47, 00:17:44.681 "qid": 0, 00:17:44.681 "state": "enabled", 00:17:44.681 "thread": "nvmf_tgt_poll_group_000", 00:17:44.681 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:44.681 "listen_address": { 00:17:44.681 "trtype": "TCP", 00:17:44.681 "adrfam": "IPv4", 00:17:44.681 "traddr": "10.0.0.2", 00:17:44.681 "trsvcid": "4420" 00:17:44.681 }, 00:17:44.681 "peer_address": { 00:17:44.681 "trtype": "TCP", 00:17:44.681 "adrfam": "IPv4", 00:17:44.681 "traddr": "10.0.0.1", 00:17:44.681 "trsvcid": "45046" 00:17:44.681 }, 00:17:44.681 "auth": { 00:17:44.681 "state": "completed", 00:17:44.681 
"digest": "sha256", 00:17:44.681 "dhgroup": "ffdhe8192" 00:17:44.681 } 00:17:44.681 } 00:17:44.681 ]' 00:17:44.681 11:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.942 11:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:44.942 11:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.942 11:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:44.942 11:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.942 11:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.942 11:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.942 11:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.202 11:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjliZTBmOGI2ZDFlNWQ3OGViMWI2M2YyMzc4NjE4YWFlM2ZmYzJmMDRmMzBiNzhkOGQ3MzcyZjIwMjgzOWI4ZOA4O6g=: 00:17:45.202 11:41:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZjliZTBmOGI2ZDFlNWQ3OGViMWI2M2YyMzc4NjE4YWFlM2ZmYzJmMDRmMzBiNzhkOGQ3MzcyZjIwMjgzOWI4ZOA4O6g=: 00:17:45.773 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.773 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:45.773 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.773 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.773 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.773 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:45.773 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:45.773 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.773 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:45.773 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:46.035 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:17:46.035 11:41:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.035 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:46.035 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:46.035 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:46.035 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.035 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.035 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.035 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.035 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.035 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.035 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.035 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.295 00:17:46.295 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.295 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.295 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.295 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.295 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.295 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.295 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.295 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.295 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.295 { 00:17:46.295 "cntlid": 49, 00:17:46.295 "qid": 0, 00:17:46.295 "state": "enabled", 00:17:46.295 "thread": "nvmf_tgt_poll_group_000", 00:17:46.295 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:46.295 "listen_address": { 00:17:46.295 "trtype": "TCP", 00:17:46.295 "adrfam": "IPv4", 
00:17:46.295 "traddr": "10.0.0.2", 00:17:46.295 "trsvcid": "4420" 00:17:46.295 }, 00:17:46.295 "peer_address": { 00:17:46.295 "trtype": "TCP", 00:17:46.295 "adrfam": "IPv4", 00:17:46.295 "traddr": "10.0.0.1", 00:17:46.295 "trsvcid": "45080" 00:17:46.295 }, 00:17:46.295 "auth": { 00:17:46.295 "state": "completed", 00:17:46.295 "digest": "sha384", 00:17:46.295 "dhgroup": "null" 00:17:46.295 } 00:17:46.295 } 00:17:46.295 ]' 00:17:46.295 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.556 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:46.556 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.556 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:46.556 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.556 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.556 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.556 11:41:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.817 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmU0MTRjYWU3MTVkYjU3ZjAzOGU2Y2U3YTM5ZDMzMmEzMzBlNjhjOWU5YmU2MmFhQ/wRzQ==: --dhchap-ctrl-secret DHHC-1:03:NDEwZWFiOTkwYzBlMGM4MDI3ZWU1OTdmNTkyNmM5MGQxZmQ3Y2RhNTg1ODhmNDgyOGJmMDFlZWI3N2MzYzU2YTRXPFQ=: 00:17:46.817 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmU0MTRjYWU3MTVkYjU3ZjAzOGU2Y2U3YTM5ZDMzMmEzMzBlNjhjOWU5YmU2MmFhQ/wRzQ==: --dhchap-ctrl-secret DHHC-1:03:NDEwZWFiOTkwYzBlMGM4MDI3ZWU1OTdmNTkyNmM5MGQxZmQ3Y2RhNTg1ODhmNDgyOGJmMDFlZWI3N2MzYzU2YTRXPFQ=: 00:17:47.388 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.388 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:47.388 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.388 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.388 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.388 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:47.388 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:47.388 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:47.648 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:17:47.648 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:47.648 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:47.648 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:47.648 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:47.648 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.648 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.648 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.648 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.648 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.648 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.648 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.648 11:41:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.908 00:17:47.908 11:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.908 11:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.908 11:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.908 11:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.908 11:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.908 11:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.908 11:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.908 11:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.169 11:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:48.169 { 00:17:48.169 "cntlid": 51, 00:17:48.169 "qid": 0, 00:17:48.169 "state": "enabled", 
00:17:48.169 "thread": "nvmf_tgt_poll_group_000", 00:17:48.169 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:48.169 "listen_address": { 00:17:48.169 "trtype": "TCP", 00:17:48.169 "adrfam": "IPv4", 00:17:48.169 "traddr": "10.0.0.2", 00:17:48.169 "trsvcid": "4420" 00:17:48.169 }, 00:17:48.169 "peer_address": { 00:17:48.169 "trtype": "TCP", 00:17:48.169 "adrfam": "IPv4", 00:17:48.169 "traddr": "10.0.0.1", 00:17:48.169 "trsvcid": "36348" 00:17:48.169 }, 00:17:48.169 "auth": { 00:17:48.169 "state": "completed", 00:17:48.169 "digest": "sha384", 00:17:48.169 "dhgroup": "null" 00:17:48.169 } 00:17:48.169 } 00:17:48.169 ]' 00:17:48.169 11:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:48.169 11:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:48.169 11:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:48.169 11:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:48.169 11:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:48.169 11:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.169 11:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.170 11:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.431 11:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWVjNDg2OGFkMTk5ZjNmMTQ2MWY2MGIyYmJkZWY3MWKp5kEY: --dhchap-ctrl-secret DHHC-1:02:MjExN2Y5YTQ0NzhiNmMyY2ZiYmVhYzE1MzZlYjg1N2M0MWRmYzUyM2ZmOWYxNTA3CmvkOw==: 00:17:48.431 11:41:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZWVjNDg2OGFkMTk5ZjNmMTQ2MWY2MGIyYmJkZWY3MWKp5kEY: --dhchap-ctrl-secret DHHC-1:02:MjExN2Y5YTQ0NzhiNmMyY2ZiYmVhYzE1MzZlYjg1N2M0MWRmYzUyM2ZmOWYxNTA3CmvkOw==: 00:17:49.004 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.004 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.004 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:49.004 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.004 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.004 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.004 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:49.004 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:17:49.004 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:49.264 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:17:49.264 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:49.264 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:49.264 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:49.264 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:49.264 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.264 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.264 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.264 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.264 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.264 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.264 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.264 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.525 00:17:49.525 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.525 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.525 11:41:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.525 11:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.785 11:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.785 11:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.785 11:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.785 11:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.785 11:41:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.785 { 00:17:49.785 "cntlid": 53, 00:17:49.785 "qid": 0, 00:17:49.785 "state": "enabled", 00:17:49.785 "thread": "nvmf_tgt_poll_group_000", 00:17:49.785 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:49.785 "listen_address": { 00:17:49.785 "trtype": "TCP", 00:17:49.785 "adrfam": "IPv4", 00:17:49.785 "traddr": "10.0.0.2", 00:17:49.785 "trsvcid": "4420" 00:17:49.785 }, 00:17:49.785 "peer_address": { 00:17:49.785 "trtype": "TCP", 00:17:49.785 "adrfam": "IPv4", 00:17:49.785 "traddr": "10.0.0.1", 00:17:49.785 "trsvcid": "36360" 00:17:49.785 }, 00:17:49.785 "auth": { 00:17:49.785 "state": "completed", 00:17:49.785 "digest": "sha384", 00:17:49.785 "dhgroup": "null" 00:17:49.785 } 00:17:49.785 } 00:17:49.785 ]' 00:17:49.785 11:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.785 11:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:49.785 11:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.785 11:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:49.785 11:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.786 11:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.786 11:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.786 11:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.046 11:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: --dhchap-ctrl-secret DHHC-1:01:NWI3YjA0Yzc1ZTlmYTEyYTRlMDAzYTA3ZThiMmEyYWQbe3Q6: 00:17:50.046 11:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: --dhchap-ctrl-secret DHHC-1:01:NWI3YjA0Yzc1ZTlmYTEyYTRlMDAzYTA3ZThiMmEyYWQbe3Q6: 00:17:50.616 11:41:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.616 11:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:50.616 11:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.616 11:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.616 11:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.616 11:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:17:50.616 11:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:50.616 11:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:50.876 11:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:50.876 11:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.876 11:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:50.876 11:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:50.876 11:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:50.876 11:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.877 11:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:50.877 11:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.877 11:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.877 11:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.877 11:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:50.877 11:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:50.877 11:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:51.137 00:17:51.138 11:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:51.138 11:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:51.138 11:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.399 11:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.399 11:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.399 11:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.399 11:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.399 11:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.399 11:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:51.399 { 00:17:51.399 "cntlid": 55, 00:17:51.399 "qid": 0, 00:17:51.399 "state": "enabled", 00:17:51.399 "thread": "nvmf_tgt_poll_group_000", 00:17:51.399 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:51.399 "listen_address": { 00:17:51.399 "trtype": "TCP", 00:17:51.399 "adrfam": "IPv4", 00:17:51.399 "traddr": "10.0.0.2", 00:17:51.399 "trsvcid": "4420" 00:17:51.399 }, 00:17:51.399 "peer_address": { 00:17:51.399 "trtype": "TCP", 00:17:51.399 "adrfam": "IPv4", 00:17:51.399 "traddr": "10.0.0.1", 00:17:51.399 "trsvcid": "36378" 00:17:51.399 }, 00:17:51.399 "auth": { 00:17:51.399 "state": "completed", 00:17:51.399 "digest": "sha384", 00:17:51.399 "dhgroup": "null" 00:17:51.399 } 00:17:51.399 } 00:17:51.399 ]' 00:17:51.399 11:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.399 11:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:51.399 11:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:51.399 11:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:51.399 11:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.399 11:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.399 11:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.399 11:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.659 11:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjliZTBmOGI2ZDFlNWQ3OGViMWI2M2YyMzc4NjE4YWFlM2ZmYzJmMDRmMzBiNzhkOGQ3MzcyZjIwMjgzOWI4ZOA4O6g=: 00:17:51.659 11:41:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZjliZTBmOGI2ZDFlNWQ3OGViMWI2M2YyMzc4NjE4YWFlM2ZmYzJmMDRmMzBiNzhkOGQ3MzcyZjIwMjgzOWI4ZOA4O6g=: 00:17:52.230 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.230 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:52.230 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.230 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.230 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.230 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:52.230 11:41:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.230 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:52.230 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:52.490 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:52.490 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.490 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:52.490 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:52.490 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:52.490 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.490 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.490 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.490 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.490 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.490 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.490 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.490 11:41:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.749 00:17:52.749 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.749 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.749 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.009 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.009 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.009 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:53.009 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.009 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.009 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.009 { 00:17:53.009 "cntlid": 57, 00:17:53.009 "qid": 0, 00:17:53.009 "state": "enabled", 00:17:53.009 "thread": "nvmf_tgt_poll_group_000", 00:17:53.009 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:53.009 "listen_address": { 00:17:53.009 "trtype": "TCP", 00:17:53.009 "adrfam": "IPv4", 00:17:53.009 "traddr": "10.0.0.2", 00:17:53.009 "trsvcid": "4420" 00:17:53.009 }, 00:17:53.009 "peer_address": { 00:17:53.009 "trtype": "TCP", 00:17:53.009 "adrfam": "IPv4", 00:17:53.009 "traddr": "10.0.0.1", 00:17:53.009 "trsvcid": "36416" 00:17:53.009 }, 00:17:53.009 "auth": { 00:17:53.009 "state": "completed", 00:17:53.009 "digest": "sha384", 00:17:53.009 "dhgroup": "ffdhe2048" 00:17:53.009 } 00:17:53.009 } 00:17:53.009 ]' 00:17:53.009 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.009 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:53.009 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.009 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:53.009 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.009 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.009 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.009 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.270 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmU0MTRjYWU3MTVkYjU3ZjAzOGU2Y2U3YTM5ZDMzMmEzMzBlNjhjOWU5YmU2MmFhQ/wRzQ==: --dhchap-ctrl-secret DHHC-1:03:NDEwZWFiOTkwYzBlMGM4MDI3ZWU1OTdmNTkyNmM5MGQxZmQ3Y2RhNTg1ODhmNDgyOGJmMDFlZWI3N2MzYzU2YTRXPFQ=: 00:17:53.270 11:41:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmU0MTRjYWU3MTVkYjU3ZjAzOGU2Y2U3YTM5ZDMzMmEzMzBlNjhjOWU5YmU2MmFhQ/wRzQ==: --dhchap-ctrl-secret DHHC-1:03:NDEwZWFiOTkwYzBlMGM4MDI3ZWU1OTdmNTkyNmM5MGQxZmQ3Y2RhNTg1ODhmNDgyOGJmMDFlZWI3N2MzYzU2YTRXPFQ=: 00:17:53.840 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.840 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:53.840 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.840 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.840 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.840 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.840 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:53.840 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:54.101 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:54.101 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.101 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:54.101 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:54.101 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:54.101 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.101 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.101 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.101 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.101 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.101 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.101 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.101 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.362 00:17:54.362 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.362 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.362 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.622 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.622 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.622 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.622 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.622 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.622 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:54.622 { 00:17:54.622 "cntlid": 59, 00:17:54.622 "qid": 0, 00:17:54.622 "state": "enabled", 00:17:54.622 "thread": "nvmf_tgt_poll_group_000", 00:17:54.622 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:54.622 "listen_address": { 00:17:54.622 "trtype": "TCP", 00:17:54.622 "adrfam": "IPv4", 00:17:54.622 "traddr": "10.0.0.2", 00:17:54.622 "trsvcid": "4420" 00:17:54.622 }, 00:17:54.622 "peer_address": { 00:17:54.622 "trtype": "TCP", 00:17:54.622 "adrfam": "IPv4", 00:17:54.622 "traddr": "10.0.0.1", 00:17:54.622 "trsvcid": "36440" 00:17:54.622 }, 00:17:54.622 "auth": { 00:17:54.622 "state": "completed", 00:17:54.622 "digest": "sha384", 00:17:54.622 "dhgroup": "ffdhe2048" 00:17:54.622 } 00:17:54.622 } 00:17:54.622 ]' 00:17:54.622 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.622 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:54.622 11:41:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:54.622 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:54.622 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.622 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.622 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.622 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.882 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWVjNDg2OGFkMTk5ZjNmMTQ2MWY2MGIyYmJkZWY3MWKp5kEY: --dhchap-ctrl-secret DHHC-1:02:MjExN2Y5YTQ0NzhiNmMyY2ZiYmVhYzE1MzZlYjg1N2M0MWRmYzUyM2ZmOWYxNTA3CmvkOw==: 00:17:54.882 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZWVjNDg2OGFkMTk5ZjNmMTQ2MWY2MGIyYmJkZWY3MWKp5kEY: --dhchap-ctrl-secret DHHC-1:02:MjExN2Y5YTQ0NzhiNmMyY2ZiYmVhYzE1MzZlYjg1N2M0MWRmYzUyM2ZmOWYxNTA3CmvkOw==: 00:17:55.453 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.453 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:55.453 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.453 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.453 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.453 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.453 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:55.453 11:41:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:55.713 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:55.713 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.713 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:55.713 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:55.713 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:55.713 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.713 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.713 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.713 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.713 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.713 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.713 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.713 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.974 00:17:55.974 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.974 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.974 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.234 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.234 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.234 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.234 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.234 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.234 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.234 { 00:17:56.234 "cntlid": 61, 00:17:56.234 "qid": 0, 00:17:56.234 "state": "enabled", 00:17:56.234 "thread": "nvmf_tgt_poll_group_000", 00:17:56.234 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:56.234 "listen_address": { 00:17:56.234 "trtype": "TCP", 00:17:56.234 "adrfam": "IPv4", 00:17:56.234 "traddr": "10.0.0.2", 00:17:56.234 "trsvcid": "4420" 00:17:56.234 }, 00:17:56.234 "peer_address": { 00:17:56.234 "trtype": "TCP", 00:17:56.234 "adrfam": "IPv4", 00:17:56.234 "traddr": "10.0.0.1", 00:17:56.234 "trsvcid": "36452" 00:17:56.234 }, 00:17:56.234 "auth": { 00:17:56.234 "state": "completed", 00:17:56.234 "digest": "sha384", 00:17:56.234 "dhgroup": "ffdhe2048" 00:17:56.234 } 00:17:56.234 } 00:17:56.234 ]' 00:17:56.234 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.234 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:56.234 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.234 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:56.234 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.234 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.234 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.234 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.494 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: --dhchap-ctrl-secret DHHC-1:01:NWI3YjA0Yzc1ZTlmYTEyYTRlMDAzYTA3ZThiMmEyYWQbe3Q6: 00:17:56.495 11:41:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: --dhchap-ctrl-secret DHHC-1:01:NWI3YjA0Yzc1ZTlmYTEyYTRlMDAzYTA3ZThiMmEyYWQbe3Q6: 00:17:57.065 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.065 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:57.065 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.065 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.065 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.065 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.065 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:57.065 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:57.325 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:57.325 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.325 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:57.325 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:57.325 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:57.325 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.325 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:57.325 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.325 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.325 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.325 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:57.325 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:57.325 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:57.585 00:17:57.585 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.585 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.585 11:41:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.845 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.845 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.845 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.845 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.845 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.845 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.845 { 00:17:57.845 "cntlid": 63, 00:17:57.845 "qid": 0, 00:17:57.845 "state": "enabled", 00:17:57.845 "thread": "nvmf_tgt_poll_group_000", 00:17:57.845 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:57.845 "listen_address": { 00:17:57.845 "trtype": "TCP", 00:17:57.845 "adrfam": "IPv4", 00:17:57.845 "traddr": "10.0.0.2", 00:17:57.845 "trsvcid": "4420" 00:17:57.845 }, 00:17:57.845 "peer_address": { 00:17:57.845 "trtype": "TCP", 00:17:57.845 "adrfam": "IPv4", 00:17:57.845 "traddr": "10.0.0.1", 00:17:57.845 "trsvcid": "46806" 00:17:57.845 }, 00:17:57.845 "auth": { 00:17:57.845 "state": "completed", 00:17:57.845 "digest": "sha384", 00:17:57.845 "dhgroup": "ffdhe2048" 00:17:57.845 } 00:17:57.845 } 00:17:57.845 ]' 00:17:57.845 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.845 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:57.845 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.845 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:57.845 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.845 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.845 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.845 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.105 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjliZTBmOGI2ZDFlNWQ3OGViMWI2M2YyMzc4NjE4YWFlM2ZmYzJmMDRmMzBiNzhkOGQ3MzcyZjIwMjgzOWI4ZOA4O6g=: 00:17:58.105 11:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZjliZTBmOGI2ZDFlNWQ3OGViMWI2M2YyMzc4NjE4YWFlM2ZmYzJmMDRmMzBiNzhkOGQ3MzcyZjIwMjgzOWI4ZOA4O6g=: 00:17:58.677 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:58.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.938 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:58.938 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.938 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.938 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.938 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:58.938 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.938 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:58.938 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:58.938 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:58.938 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:58.938 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:58.938 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:58.938 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:58.938 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.938 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.938 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.938 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.938 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.938 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.938 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.938 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.199 
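[Editor's note] The target/auth.sh round trip traced above reduces to three RPC calls per digest/dhgroup/key combination. Below is a minimal sketch of one such iteration, assuming an SPDK target already serves nqn.2024-03.io.spdk:cnode0 on 10.0.0.2:4420 over its default RPC socket, a host-side RPC server listens on /var/tmp/host.sock, and keys named key0/ckey0 are already loaded (key creation is not shown in this excerpt). All commands and flags are taken verbatim from the log; only the variable names are introduced here.

    #!/usr/bin/env bash
    # Sketch of one connect_authenticate iteration (target/auth.sh@121/@70/@71).
    set -e
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostsock=/var/tmp/host.sock
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    subnqn=nqn.2024-03.io.spdk:cnode0

    digest=sha384 dhgroup=ffdhe3072 keyid=0

    # 1. Restrict the host-side initiator to a single digest/dhgroup pair.
    "$rpc" -s "$hostsock" bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # 2. Allow the host on the target subsystem (target uses its default RPC
    #    socket here, assumed /var/tmp/spdk.sock). key3 in the log is added
    #    without a controller key, so the --dhchap-ctrlr-key flag is optional.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # 3. Attach a controller through the host RPC server; this is where the
    #    DH-HMAC-CHAP exchange actually runs.
    "$rpc" -s "$hostsock" bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"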
00:17:59.199 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.199 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.199 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.460 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.460 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.460 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.460 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.460 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.460 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.460 { 00:17:59.460 "cntlid": 65, 00:17:59.460 "qid": 0, 00:17:59.460 "state": "enabled", 00:17:59.460 "thread": "nvmf_tgt_poll_group_000", 00:17:59.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:59.460 "listen_address": { 00:17:59.460 "trtype": "TCP", 00:17:59.460 "adrfam": "IPv4", 00:17:59.460 "traddr": "10.0.0.2", 00:17:59.460 "trsvcid": "4420" 00:17:59.460 }, 00:17:59.460 "peer_address": { 00:17:59.460 "trtype": "TCP", 00:17:59.460 "adrfam": "IPv4", 00:17:59.460 "traddr": "10.0.0.1", 00:17:59.460 "trsvcid": "46852" 00:17:59.460 }, 00:17:59.460 "auth": { 00:17:59.460 "state": "completed", 00:17:59.460 "digest": "sha384", 00:17:59.460 "dhgroup": "ffdhe3072" 00:17:59.460 } 00:17:59.460 } 00:17:59.460 ]' 00:17:59.460 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.461 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:59.461 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.461 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:59.461 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.461 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.461 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.461 11:41:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.721 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmU0MTRjYWU3MTVkYjU3ZjAzOGU2Y2U3YTM5ZDMzMmEzMzBlNjhjOWU5YmU2MmFhQ/wRzQ==: --dhchap-ctrl-secret DHHC-1:03:NDEwZWFiOTkwYzBlMGM4MDI3ZWU1OTdmNTkyNmM5MGQxZmQ3Y2RhNTg1ODhmNDgyOGJmMDFlZWI3N2MzYzU2YTRXPFQ=: 00:17:59.721 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmU0MTRjYWU3MTVkYjU3ZjAzOGU2Y2U3YTM5ZDMzMmEzMzBlNjhjOWU5YmU2MmFhQ/wRzQ==: --dhchap-ctrl-secret DHHC-1:03:NDEwZWFiOTkwYzBlMGM4MDI3ZWU1OTdmNTkyNmM5MGQxZmQ3Y2RhNTg1ODhmNDgyOGJmMDFlZWI3N2MzYzU2YTRXPFQ=: 00:18:00.293 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.553 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:00.553 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.553 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.553 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.553 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.553 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:00.553 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:00.553 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:18:00.553 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.553 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:00.553 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:00.553 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:00.553 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.553 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.553 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.553 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.553 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.553 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.553 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.553 11:41:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:00.813 00:18:00.813 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:00.813 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:00.813 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.074 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.074 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.074 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.074 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.074 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.074 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.074 { 00:18:01.074 "cntlid": 67, 00:18:01.074 "qid": 0, 00:18:01.074 "state": "enabled", 00:18:01.074 "thread": "nvmf_tgt_poll_group_000", 00:18:01.074 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:01.074 "listen_address": { 00:18:01.074 "trtype": "TCP", 00:18:01.074 "adrfam": "IPv4", 00:18:01.074 "traddr": "10.0.0.2", 00:18:01.074 "trsvcid": "4420" 00:18:01.074 }, 00:18:01.074 "peer_address": { 00:18:01.074 "trtype": "TCP", 00:18:01.074 "adrfam": "IPv4", 00:18:01.074 "traddr": "10.0.0.1", 00:18:01.074 "trsvcid": "46874" 00:18:01.074 }, 00:18:01.074 "auth": { 00:18:01.074 "state": "completed", 00:18:01.074 "digest": "sha384", 00:18:01.074 "dhgroup": "ffdhe3072" 00:18:01.074 } 00:18:01.074 } 00:18:01.074 ]' 00:18:01.074 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.074 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:01.074 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.074 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:01.074 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.335 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.335 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.335 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.335 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWVjNDg2OGFkMTk5ZjNmMTQ2MWY2MGIyYmJkZWY3MWKp5kEY: --dhchap-ctrl-secret 
DHHC-1:02:MjExN2Y5YTQ0NzhiNmMyY2ZiYmVhYzE1MzZlYjg1N2M0MWRmYzUyM2ZmOWYxNTA3CmvkOw==: 00:18:01.335 11:41:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZWVjNDg2OGFkMTk5ZjNmMTQ2MWY2MGIyYmJkZWY3MWKp5kEY: --dhchap-ctrl-secret DHHC-1:02:MjExN2Y5YTQ0NzhiNmMyY2ZiYmVhYzE1MzZlYjg1N2M0MWRmYzUyM2ZmOWYxNTA3CmvkOw==: 00:18:02.279 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.279 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.279 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:02.279 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.279 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.279 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.279 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.279 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:02.279 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:02.279 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:18:02.279 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.279 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:02.279 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:02.279 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:02.279 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.279 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.279 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.279 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.279 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.279 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.279 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.279 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:02.540 00:18:02.540 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.540 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.540 11:41:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.802 11:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.802 11:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.802 11:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.802 11:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.802 11:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.802 11:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:02.802 { 00:18:02.802 "cntlid": 69, 00:18:02.802 "qid": 0, 00:18:02.802 "state": "enabled", 00:18:02.802 "thread": "nvmf_tgt_poll_group_000", 00:18:02.802 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:02.802 "listen_address": { 00:18:02.802 "trtype": "TCP", 00:18:02.802 "adrfam": "IPv4", 00:18:02.802 "traddr": "10.0.0.2", 00:18:02.802 "trsvcid": "4420" 00:18:02.802 }, 00:18:02.802 "peer_address": { 00:18:02.802 "trtype": "TCP", 00:18:02.802 "adrfam": "IPv4", 00:18:02.802 "traddr": "10.0.0.1", 00:18:02.802 "trsvcid": "46912" 00:18:02.802 }, 00:18:02.802 "auth": { 00:18:02.802 "state": "completed", 00:18:02.802 "digest": "sha384", 00:18:02.802 "dhgroup": "ffdhe3072" 00:18:02.802 } 00:18:02.802 } 00:18:02.802 ]' 00:18:02.802 11:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.802 11:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:02.802 11:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.802 11:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:02.802 11:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.802 11:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.802 11:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.802 11:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:18:03.062 11:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: --dhchap-ctrl-secret DHHC-1:01:NWI3YjA0Yzc1ZTlmYTEyYTRlMDAzYTA3ZThiMmEyYWQbe3Q6: 00:18:03.062 11:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: --dhchap-ctrl-secret DHHC-1:01:NWI3YjA0Yzc1ZTlmYTEyYTRlMDAzYTA3ZThiMmEyYWQbe3Q6: 00:18:03.633 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.633 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:03.633 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.633 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.633 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.633 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.633 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:03.633 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:03.893 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:18:03.893 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.893 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:03.893 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:03.893 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:03.893 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.893 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:03.893 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.893 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.893 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.893 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
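[Editor's note] The verification step that follows each attach (target/auth.sh@73-@77 throughout the trace) just pulls the controller list and the subsystem's qpairs, then compares three JSON fields. A compact sketch of that check, reusing the $rpc/$hostsock variables from the sketch above; the jq filters and expected values are exactly those in the log ("null" is the reported dhgroup when no FFDHE group is negotiated).

    # Sketch of the qpair verification (target/auth.sh@73-@77); assumes a
    # single established qpair on the subsystem.
    check_auth() {
        local digest=$1 dhgroup=$2 qpairs

        # The controller created by bdev_nvme_attach_controller must exist.
        [[ $("$rpc" -s "$hostsock" bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

        qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

        # Same three assertions as the log: negotiated digest, negotiated
        # dhgroup, and a completed authentication state.
        [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
        [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
        [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    }
    check_auth sha384 ffdhe3072   # e.g. for the iteration traced above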
00:18:03.893 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:03.893 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:04.154 00:18:04.154 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.154 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.154 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.416 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.416 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.416 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.416 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.416 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.416 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.416 { 00:18:04.416 "cntlid": 71, 00:18:04.416 "qid": 0, 00:18:04.416 "state": "enabled", 00:18:04.416 "thread": "nvmf_tgt_poll_group_000", 00:18:04.416 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:04.416 "listen_address": { 00:18:04.416 "trtype": "TCP", 00:18:04.416 "adrfam": "IPv4", 00:18:04.416 "traddr": "10.0.0.2", 00:18:04.416 "trsvcid": "4420" 00:18:04.416 }, 00:18:04.416 "peer_address": { 00:18:04.416 "trtype": "TCP", 00:18:04.416 "adrfam": "IPv4", 00:18:04.416 "traddr": "10.0.0.1", 00:18:04.416 "trsvcid": "46934" 00:18:04.416 }, 00:18:04.416 "auth": { 00:18:04.416 "state": "completed", 00:18:04.416 "digest": "sha384", 00:18:04.416 "dhgroup": "ffdhe3072" 00:18:04.416 } 00:18:04.416 } 00:18:04.416 ]' 00:18:04.416 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.416 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:04.416 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.416 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:04.416 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.416 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.416 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.416 11:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:04.677 11:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjliZTBmOGI2ZDFlNWQ3OGViMWI2M2YyMzc4NjE4YWFlM2ZmYzJmMDRmMzBiNzhkOGQ3MzcyZjIwMjgzOWI4ZOA4O6g=: 00:18:04.677 11:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZjliZTBmOGI2ZDFlNWQ3OGViMWI2M2YyMzc4NjE4YWFlM2ZmYzJmMDRmMzBiNzhkOGQ3MzcyZjIwMjgzOWI4ZOA4O6g=: 00:18:05.248 11:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.248 11:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:05.248 11:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.248 11:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.248 11:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.248 11:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:05.248 11:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.248 11:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:05.248 11:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:05.509 11:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:18:05.509 11:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:05.509 11:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:05.509 11:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:05.509 11:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:05.509 11:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.509 11:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.509 11:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.509 11:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.509 11:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
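[Editor's note] The kernel-initiator legs of each iteration (target/auth.sh@36 and @82 above) drive the same DH-HMAC-CHAP exchange through nvme-cli rather than the SPDK host stack. A sketch of that connect/disconnect pair follows; every flag appears verbatim in the log, and the DHHC-1 secrets are placeholders for the per-run base64 blobs shown above.

    # Sketch of the nvme-cli round trip (target/auth.sh@36/@82). Replace the
    # <...> placeholders with the actual DHHC-1 secrets generated for the run.
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret 'DHHC-1:00:<host key>' \
        --dhchap-ctrl-secret 'DHHC-1:03:<controller key>'

    # Tear the association down again before the next digest/dhgroup pass.
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0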
00:18:05.509 11:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.509 11:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.509 11:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:05.771 00:18:05.771 11:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:05.771 11:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:05.771 11:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.032 11:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.032 11:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.032 11:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.032 11:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.032 11:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.032 11:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.032 { 00:18:06.032 "cntlid": 73, 00:18:06.032 "qid": 0, 00:18:06.032 "state": "enabled", 00:18:06.032 "thread": "nvmf_tgt_poll_group_000", 00:18:06.032 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:06.032 "listen_address": { 00:18:06.032 "trtype": "TCP", 00:18:06.032 "adrfam": "IPv4", 00:18:06.032 "traddr": "10.0.0.2", 00:18:06.032 "trsvcid": "4420" 00:18:06.032 }, 00:18:06.032 "peer_address": { 00:18:06.032 "trtype": "TCP", 00:18:06.032 "adrfam": "IPv4", 00:18:06.032 "traddr": "10.0.0.1", 00:18:06.032 "trsvcid": "46972" 00:18:06.032 }, 00:18:06.032 "auth": { 00:18:06.032 "state": "completed", 00:18:06.032 "digest": "sha384", 00:18:06.032 "dhgroup": "ffdhe4096" 00:18:06.032 } 00:18:06.032 } 00:18:06.032 ]' 00:18:06.032 11:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.032 11:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:06.032 11:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.032 11:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:06.032 11:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.032 11:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.032 
11:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.032 11:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.293 11:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmU0MTRjYWU3MTVkYjU3ZjAzOGU2Y2U3YTM5ZDMzMmEzMzBlNjhjOWU5YmU2MmFhQ/wRzQ==: --dhchap-ctrl-secret DHHC-1:03:NDEwZWFiOTkwYzBlMGM4MDI3ZWU1OTdmNTkyNmM5MGQxZmQ3Y2RhNTg1ODhmNDgyOGJmMDFlZWI3N2MzYzU2YTRXPFQ=: 00:18:06.293 11:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmU0MTRjYWU3MTVkYjU3ZjAzOGU2Y2U3YTM5ZDMzMmEzMzBlNjhjOWU5YmU2MmFhQ/wRzQ==: --dhchap-ctrl-secret DHHC-1:03:NDEwZWFiOTkwYzBlMGM4MDI3ZWU1OTdmNTkyNmM5MGQxZmQ3Y2RhNTg1ODhmNDgyOGJmMDFlZWI3N2MzYzU2YTRXPFQ=: 00:18:06.865 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.865 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:06.865 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.865 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.865 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.865 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:06.865 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:06.865 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:07.126 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:18:07.126 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.126 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:07.126 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:07.126 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:07.126 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.126 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.126 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.126 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.126 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.126 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.126 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.126 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:07.388 00:18:07.388 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.388 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.388 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.650 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.650 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.650 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.650 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.650 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.650 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.650 { 00:18:07.650 "cntlid": 75, 00:18:07.650 "qid": 0, 00:18:07.650 "state": "enabled", 00:18:07.650 "thread": "nvmf_tgt_poll_group_000", 00:18:07.650 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:07.650 "listen_address": { 00:18:07.650 "trtype": "TCP", 00:18:07.650 "adrfam": "IPv4", 00:18:07.650 "traddr": "10.0.0.2", 00:18:07.650 "trsvcid": "4420" 00:18:07.650 }, 00:18:07.650 "peer_address": { 00:18:07.650 "trtype": "TCP", 00:18:07.650 "adrfam": "IPv4", 00:18:07.650 "traddr": "10.0.0.1", 00:18:07.650 "trsvcid": "34890" 00:18:07.650 }, 00:18:07.650 "auth": { 00:18:07.650 "state": "completed", 00:18:07.650 "digest": "sha384", 00:18:07.650 "dhgroup": "ffdhe4096" 00:18:07.650 } 00:18:07.650 } 00:18:07.650 ]' 00:18:07.650 11:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:07.650 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:07.650 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.650 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:18:07.650 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.650 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.650 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.650 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.912 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWVjNDg2OGFkMTk5ZjNmMTQ2MWY2MGIyYmJkZWY3MWKp5kEY: --dhchap-ctrl-secret DHHC-1:02:MjExN2Y5YTQ0NzhiNmMyY2ZiYmVhYzE1MzZlYjg1N2M0MWRmYzUyM2ZmOWYxNTA3CmvkOw==: 00:18:07.912 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZWVjNDg2OGFkMTk5ZjNmMTQ2MWY2MGIyYmJkZWY3MWKp5kEY: --dhchap-ctrl-secret DHHC-1:02:MjExN2Y5YTQ0NzhiNmMyY2ZiYmVhYzE1MzZlYjg1N2M0MWRmYzUyM2ZmOWYxNTA3CmvkOw==: 00:18:08.483 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.484 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:08.484 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.484 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.484 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.484 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.484 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:08.484 11:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:08.744 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:18:08.744 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.744 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:08.744 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:08.744 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:08.745 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.745 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.745 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.745 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.745 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.745 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.745 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:08.745 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:09.006 00:18:09.007 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.007 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.007 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.268 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.268 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.268 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.268 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.268 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.268 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.268 { 00:18:09.268 "cntlid": 77, 00:18:09.268 "qid": 0, 00:18:09.268 "state": "enabled", 00:18:09.268 "thread": "nvmf_tgt_poll_group_000", 00:18:09.268 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:09.268 "listen_address": { 00:18:09.268 "trtype": "TCP", 00:18:09.268 "adrfam": "IPv4", 00:18:09.268 "traddr": "10.0.0.2", 00:18:09.268 "trsvcid": "4420" 00:18:09.268 }, 00:18:09.268 "peer_address": { 00:18:09.268 "trtype": "TCP", 00:18:09.268 "adrfam": "IPv4", 00:18:09.268 "traddr": "10.0.0.1", 00:18:09.268 "trsvcid": "34912" 00:18:09.268 }, 00:18:09.268 "auth": { 00:18:09.268 "state": "completed", 00:18:09.268 "digest": "sha384", 00:18:09.268 "dhgroup": "ffdhe4096" 00:18:09.268 } 00:18:09.268 } 00:18:09.268 ]' 00:18:09.268 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.268 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:09.268 11:41:34 
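This pass exercises key2 bidirectionally: --dhchap-key is the host's credential for authenticating to the controller, and --dhchap-ctrlr-key arms the reverse direction, so the controller must prove itself back to the host. The pairing only works because the target was given the same key names via nvmf_subsystem_add_host immediately beforehand. The two matching calls, sketched with hostnqn standing for the uuid-based NQN used throughout this run:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  # Target side learns both directions' keys for this host entry...
  $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  # ...and the host-side attach names the same pair.
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
      -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2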
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.268 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:09.268 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.268 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.268 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.268 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.529 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: --dhchap-ctrl-secret DHHC-1:01:NWI3YjA0Yzc1ZTlmYTEyYTRlMDAzYTA3ZThiMmEyYWQbe3Q6: 00:18:09.529 11:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: --dhchap-ctrl-secret DHHC-1:01:NWI3YjA0Yzc1ZTlmYTEyYTRlMDAzYTA3ZThiMmEyYWQbe3Q6: 00:18:10.101 11:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.101 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.101 11:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:10.101 11:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.101 11:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.361 11:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.361 11:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:10.361 11:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:10.361 11:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:10.361 11:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:18:10.361 11:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.361 11:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:10.361 11:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:10.361 11:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:10.361 11:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.361 11:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:10.361 11:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.361 11:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.361 11:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.361 11:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:10.361 11:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:10.361 11:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:10.628 00:18:10.628 11:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:10.628 11:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:10.628 11:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.962 11:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.962 11:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.962 11:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.962 11:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.962 11:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.962 11:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:10.962 { 00:18:10.962 "cntlid": 79, 00:18:10.962 "qid": 0, 00:18:10.962 "state": "enabled", 00:18:10.962 "thread": "nvmf_tgt_poll_group_000", 00:18:10.962 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:10.962 "listen_address": { 00:18:10.962 "trtype": "TCP", 00:18:10.962 "adrfam": "IPv4", 00:18:10.962 "traddr": "10.0.0.2", 00:18:10.962 "trsvcid": "4420" 00:18:10.962 }, 00:18:10.962 "peer_address": { 00:18:10.962 "trtype": "TCP", 00:18:10.962 "adrfam": "IPv4", 00:18:10.962 "traddr": "10.0.0.1", 00:18:10.962 "trsvcid": "34944" 00:18:10.962 }, 00:18:10.962 "auth": { 00:18:10.962 "state": "completed", 00:18:10.962 "digest": "sha384", 00:18:10.962 "dhgroup": "ffdhe4096" 00:18:10.962 } 00:18:10.962 } 00:18:10.962 ]' 00:18:10.962 11:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.962 11:41:36 
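key3 is the deliberate outlier: both its nvmf_subsystem_add_host and the bdev_connect above carry only --dhchap-key key3, because ckeys[3] is empty and the ${ckeys[$3]:+...} expansion traced here collapses to nothing, making this pass a unidirectional-authentication case. The expansion pattern in isolation, with illustrative array contents:

  # ${var:+word} yields word only when var is set and non-empty, so an empty
  # slot drops the whole --dhchap-ctrlr-key argument pair.
  ckeys=("DHHC-1:..." "DHHC-1:..." "DHHC-1:..." "")   # illustrative; the real script leaves slot 3 empty
  keyid=3
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo "${#ckey[@]}"   # prints 0: no controller key is passed for key3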
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:10.962 11:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.962 11:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:10.962 11:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.962 11:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.962 11:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.962 11:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.254 11:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjliZTBmOGI2ZDFlNWQ3OGViMWI2M2YyMzc4NjE4YWFlM2ZmYzJmMDRmMzBiNzhkOGQ3MzcyZjIwMjgzOWI4ZOA4O6g=: 00:18:11.254 11:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZjliZTBmOGI2ZDFlNWQ3OGViMWI2M2YyMzc4NjE4YWFlM2ZmYzJmMDRmMzBiNzhkOGQ3MzcyZjIwMjgzOWI4ZOA4O6g=: 00:18:11.882 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.882 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:11.882 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.882 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.882 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.882 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:11.882 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.882 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:11.882 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:12.142 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:18:12.142 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.142 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:12.142 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:12.142 11:41:37 
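The DHHC-1 strings handed to nvme connect in this stretch follow nvme-cli's textual key layout, DHHC-1:<id>:<base64 payload>:, where the id field (00/01/02/03) records the transformation applied to the configured secret (none, or HMAC with SHA-256/SHA-384/SHA-512) and the base64 payload carries the key material plus a trailing CRC-32. Keys of this shape can be generated with nvme-cli; a sketch, noting that exact flag spellings may differ between nvme-cli versions:

  # 48-byte secret, SHA-384 transformation, bound to the host NQN used in this run.
  nvme gen-dhchap-key --key-length=48 --hmac=2 \
      --nqn nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be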
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:12.142 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.142 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.142 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.142 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.142 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.142 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.143 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.143 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:12.403 00:18:12.403 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.403 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.403 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.663 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.663 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.663 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.663 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.663 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.663 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:12.663 { 00:18:12.663 "cntlid": 81, 00:18:12.663 "qid": 0, 00:18:12.663 "state": "enabled", 00:18:12.663 "thread": "nvmf_tgt_poll_group_000", 00:18:12.663 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:12.663 "listen_address": { 00:18:12.663 "trtype": "TCP", 00:18:12.663 "adrfam": "IPv4", 00:18:12.663 "traddr": "10.0.0.2", 00:18:12.663 "trsvcid": "4420" 00:18:12.663 }, 00:18:12.663 "peer_address": { 00:18:12.663 "trtype": "TCP", 00:18:12.663 "adrfam": "IPv4", 00:18:12.663 "traddr": "10.0.0.1", 00:18:12.663 "trsvcid": "34978" 00:18:12.663 }, 00:18:12.663 "auth": { 00:18:12.663 "state": "completed", 00:18:12.663 "digest": 
"sha384", 00:18:12.663 "dhgroup": "ffdhe6144" 00:18:12.663 } 00:18:12.663 } 00:18:12.663 ]' 00:18:12.663 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:12.663 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:12.664 11:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:12.664 11:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:12.664 11:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:12.664 11:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:12.664 11:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.664 11:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.924 11:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmU0MTRjYWU3MTVkYjU3ZjAzOGU2Y2U3YTM5ZDMzMmEzMzBlNjhjOWU5YmU2MmFhQ/wRzQ==: --dhchap-ctrl-secret DHHC-1:03:NDEwZWFiOTkwYzBlMGM4MDI3ZWU1OTdmNTkyNmM5MGQxZmQ3Y2RhNTg1ODhmNDgyOGJmMDFlZWI3N2MzYzU2YTRXPFQ=: 00:18:12.924 11:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmU0MTRjYWU3MTVkYjU3ZjAzOGU2Y2U3YTM5ZDMzMmEzMzBlNjhjOWU5YmU2MmFhQ/wRzQ==: --dhchap-ctrl-secret DHHC-1:03:NDEwZWFiOTkwYzBlMGM4MDI3ZWU1OTdmNTkyNmM5MGQxZmQ3Y2RhNTg1ODhmNDgyOGJmMDFlZWI3N2MzYzU2YTRXPFQ=: 00:18:13.495 11:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.495 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.495 11:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:13.495 11:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.495 11:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.495 11:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.495 11:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:13.495 11:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:13.495 11:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:13.756 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:18:13.756 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:13.756 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:13.756 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:13.756 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:13.756 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.756 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.756 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.756 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.756 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.756 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.756 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.756 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.017 00:18:14.017 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.017 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.017 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.277 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.277 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.277 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.277 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.277 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.277 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.277 { 00:18:14.277 "cntlid": 83, 00:18:14.277 "qid": 0, 00:18:14.277 "state": "enabled", 00:18:14.277 "thread": "nvmf_tgt_poll_group_000", 00:18:14.277 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:14.277 "listen_address": { 00:18:14.277 "trtype": "TCP", 00:18:14.277 "adrfam": "IPv4", 00:18:14.277 "traddr": "10.0.0.2", 00:18:14.277 
"trsvcid": "4420" 00:18:14.277 }, 00:18:14.277 "peer_address": { 00:18:14.277 "trtype": "TCP", 00:18:14.277 "adrfam": "IPv4", 00:18:14.277 "traddr": "10.0.0.1", 00:18:14.277 "trsvcid": "35014" 00:18:14.277 }, 00:18:14.277 "auth": { 00:18:14.277 "state": "completed", 00:18:14.277 "digest": "sha384", 00:18:14.277 "dhgroup": "ffdhe6144" 00:18:14.277 } 00:18:14.277 } 00:18:14.277 ]' 00:18:14.277 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.277 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:14.277 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.277 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:14.277 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:14.537 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.537 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.537 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.537 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWVjNDg2OGFkMTk5ZjNmMTQ2MWY2MGIyYmJkZWY3MWKp5kEY: --dhchap-ctrl-secret DHHC-1:02:MjExN2Y5YTQ0NzhiNmMyY2ZiYmVhYzE1MzZlYjg1N2M0MWRmYzUyM2ZmOWYxNTA3CmvkOw==: 00:18:14.537 11:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZWVjNDg2OGFkMTk5ZjNmMTQ2MWY2MGIyYmJkZWY3MWKp5kEY: --dhchap-ctrl-secret DHHC-1:02:MjExN2Y5YTQ0NzhiNmMyY2ZiYmVhYzE1MzZlYjg1N2M0MWRmYzUyM2ZmOWYxNTA3CmvkOw==: 00:18:15.476 11:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.476 11:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:15.476 11:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.476 11:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.476 11:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.476 11:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:15.476 11:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:15.476 11:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:15.476 
11:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:18:15.476 11:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.476 11:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:15.476 11:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:15.476 11:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:15.476 11:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.476 11:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.476 11:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.476 11:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.476 11:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.476 11:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.476 11:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.476 11:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.736 00:18:15.736 11:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.736 11:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.736 11:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.996 11:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.996 11:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.996 11:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.996 11:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.996 11:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.996 11:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:15.996 { 00:18:15.996 "cntlid": 85, 00:18:15.996 "qid": 0, 00:18:15.996 "state": "enabled", 00:18:15.996 "thread": "nvmf_tgt_poll_group_000", 00:18:15.996 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:15.996 "listen_address": { 00:18:15.996 "trtype": "TCP", 00:18:15.996 "adrfam": "IPv4", 00:18:15.996 "traddr": "10.0.0.2", 00:18:15.996 "trsvcid": "4420" 00:18:15.996 }, 00:18:15.996 "peer_address": { 00:18:15.996 "trtype": "TCP", 00:18:15.996 "adrfam": "IPv4", 00:18:15.996 "traddr": "10.0.0.1", 00:18:15.996 "trsvcid": "35046" 00:18:15.996 }, 00:18:15.996 "auth": { 00:18:15.996 "state": "completed", 00:18:15.996 "digest": "sha384", 00:18:15.996 "dhgroup": "ffdhe6144" 00:18:15.996 } 00:18:15.996 } 00:18:15.996 ]' 00:18:15.996 11:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:15.996 11:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:15.996 11:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.996 11:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:15.996 11:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.256 11:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.256 11:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.256 11:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.256 11:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: --dhchap-ctrl-secret DHHC-1:01:NWI3YjA0Yzc1ZTlmYTEyYTRlMDAzYTA3ZThiMmEyYWQbe3Q6: 00:18:16.256 11:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: --dhchap-ctrl-secret DHHC-1:01:NWI3YjA0Yzc1ZTlmYTEyYTRlMDAzYTA3ZThiMmEyYWQbe3Q6: 00:18:17.195 11:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.195 11:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:17.195 11:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.195 11:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.195 11:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.195 11:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:17.195 11:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:17.195 11:41:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:17.195 11:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:18:17.196 11:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:17.196 11:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:17.196 11:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:17.196 11:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:17.196 11:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.196 11:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:17.196 11:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.196 11:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.196 11:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.196 11:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:17.196 11:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:17.196 11:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:17.456 00:18:17.456 11:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.456 11:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.456 11:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.716 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.716 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.716 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.716 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.716 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.716 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.716 { 00:18:17.716 "cntlid": 87, 
00:18:17.716 "qid": 0, 00:18:17.716 "state": "enabled", 00:18:17.716 "thread": "nvmf_tgt_poll_group_000", 00:18:17.716 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:17.716 "listen_address": { 00:18:17.716 "trtype": "TCP", 00:18:17.716 "adrfam": "IPv4", 00:18:17.716 "traddr": "10.0.0.2", 00:18:17.716 "trsvcid": "4420" 00:18:17.716 }, 00:18:17.716 "peer_address": { 00:18:17.716 "trtype": "TCP", 00:18:17.716 "adrfam": "IPv4", 00:18:17.716 "traddr": "10.0.0.1", 00:18:17.716 "trsvcid": "43064" 00:18:17.716 }, 00:18:17.716 "auth": { 00:18:17.717 "state": "completed", 00:18:17.717 "digest": "sha384", 00:18:17.717 "dhgroup": "ffdhe6144" 00:18:17.717 } 00:18:17.717 } 00:18:17.717 ]' 00:18:17.717 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:17.717 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:17.717 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.717 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:17.717 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.977 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.977 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.977 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.977 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjliZTBmOGI2ZDFlNWQ3OGViMWI2M2YyMzc4NjE4YWFlM2ZmYzJmMDRmMzBiNzhkOGQ3MzcyZjIwMjgzOWI4ZOA4O6g=: 00:18:17.977 11:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZjliZTBmOGI2ZDFlNWQ3OGViMWI2M2YyMzc4NjE4YWFlM2ZmYzJmMDRmMzBiNzhkOGQ3MzcyZjIwMjgzOWI4ZOA4O6g=: 00:18:18.917 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.917 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:18.917 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.917 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.917 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.917 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:18.917 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:18.917 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:18.917 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:18.917 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:18:18.917 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.917 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:18.917 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:18.917 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:18.917 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.917 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.918 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.918 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.918 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.918 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.918 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:18.918 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.487 00:18:19.487 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:19.487 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.487 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.487 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.487 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.487 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.487 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.487 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
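With ffdhe8192 the run enters the last DH group covered by this stretch of the log; ffdhe2048 through ffdhe8192 are the finite-field groups NVMe's DH-HMAC-CHAP borrows from RFC 7919. Before every connect the host initiator is re-pinned to a single digest and a single DH group via bdev_nvme_set_options, so the negotiation asserted afterwards has exactly one legal outcome. The loop shape this corresponds to, paraphrased from target/auth.sh (hostrpc and connect_authenticate are that script's helpers, and the real arrays there also cover the other digests and groups):

  keys=("DHHC-1:..." "DHHC-1:..." "DHHC-1:..." "DHHC-1:...")   # illustrative contents
  for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do             # the slice visible in this log section
      for keyid in "${!keys[@]}"; do
          hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha384 "$dhgroup" "$keyid"
      done
  done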
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.487 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.487 { 00:18:19.487 "cntlid": 89, 00:18:19.487 "qid": 0, 00:18:19.487 "state": "enabled", 00:18:19.487 "thread": "nvmf_tgt_poll_group_000", 00:18:19.487 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:19.487 "listen_address": { 00:18:19.487 "trtype": "TCP", 00:18:19.487 "adrfam": "IPv4", 00:18:19.487 "traddr": "10.0.0.2", 00:18:19.487 "trsvcid": "4420" 00:18:19.487 }, 00:18:19.487 "peer_address": { 00:18:19.487 "trtype": "TCP", 00:18:19.487 "adrfam": "IPv4", 00:18:19.487 "traddr": "10.0.0.1", 00:18:19.487 "trsvcid": "43088" 00:18:19.487 }, 00:18:19.487 "auth": { 00:18:19.487 "state": "completed", 00:18:19.487 "digest": "sha384", 00:18:19.487 "dhgroup": "ffdhe8192" 00:18:19.487 } 00:18:19.488 } 00:18:19.488 ]' 00:18:19.488 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.488 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:19.488 11:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.748 11:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:19.748 11:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:19.748 11:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.748 11:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.748 11:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.748 11:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmU0MTRjYWU3MTVkYjU3ZjAzOGU2Y2U3YTM5ZDMzMmEzMzBlNjhjOWU5YmU2MmFhQ/wRzQ==: --dhchap-ctrl-secret DHHC-1:03:NDEwZWFiOTkwYzBlMGM4MDI3ZWU1OTdmNTkyNmM5MGQxZmQ3Y2RhNTg1ODhmNDgyOGJmMDFlZWI3N2MzYzU2YTRXPFQ=: 00:18:19.748 11:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmU0MTRjYWU3MTVkYjU3ZjAzOGU2Y2U3YTM5ZDMzMmEzMzBlNjhjOWU5YmU2MmFhQ/wRzQ==: --dhchap-ctrl-secret DHHC-1:03:NDEwZWFiOTkwYzBlMGM4MDI3ZWU1OTdmNTkyNmM5MGQxZmQ3Y2RhNTg1ODhmNDgyOGJmMDFlZWI3N2MzYzU2YTRXPFQ=: 00:18:20.689 11:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.689 11:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:20.689 11:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.689 11:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.689 11:41:45 
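The tail of each pass, just traced again, is a fixed sequence: the SPDK host app detaches its controller, the same secrets are then exercised once more through the kernel initiator (nvme connect with --dhchap-secret/--dhchap-ctrl-secret followed by nvme disconnect, which prints the recurring "disconnected 1 controller(s)" lines), and finally the host entry is removed from the subsystem so the next key starts clean. In isolation, with $key/$ckey standing for the DHHC-1 strings of the current pass:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" \
      --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
      --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  $rpc nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"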
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.689 11:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:20.689 11:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:20.689 11:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:20.689 11:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:18:20.689 11:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:20.689 11:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:20.689 11:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:20.689 11:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:20.689 11:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.689 11:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.689 11:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.689 11:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.689 11:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.689 11:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.689 11:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.689 11:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.261 00:18:21.261 11:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:21.261 11:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:21.261 11:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.261 11:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.261 11:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:18:21.261 11:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.261 11:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.261 11:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.261 11:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:21.261 { 00:18:21.261 "cntlid": 91, 00:18:21.261 "qid": 0, 00:18:21.261 "state": "enabled", 00:18:21.261 "thread": "nvmf_tgt_poll_group_000", 00:18:21.261 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:21.261 "listen_address": { 00:18:21.261 "trtype": "TCP", 00:18:21.261 "adrfam": "IPv4", 00:18:21.261 "traddr": "10.0.0.2", 00:18:21.261 "trsvcid": "4420" 00:18:21.261 }, 00:18:21.261 "peer_address": { 00:18:21.261 "trtype": "TCP", 00:18:21.261 "adrfam": "IPv4", 00:18:21.261 "traddr": "10.0.0.1", 00:18:21.261 "trsvcid": "43112" 00:18:21.261 }, 00:18:21.261 "auth": { 00:18:21.261 "state": "completed", 00:18:21.261 "digest": "sha384", 00:18:21.261 "dhgroup": "ffdhe8192" 00:18:21.261 } 00:18:21.261 } 00:18:21.261 ]' 00:18:21.261 11:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:21.522 11:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:21.522 11:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:21.522 11:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:21.522 11:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:21.522 11:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.522 11:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.522 11:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.783 11:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWVjNDg2OGFkMTk5ZjNmMTQ2MWY2MGIyYmJkZWY3MWKp5kEY: --dhchap-ctrl-secret DHHC-1:02:MjExN2Y5YTQ0NzhiNmMyY2ZiYmVhYzE1MzZlYjg1N2M0MWRmYzUyM2ZmOWYxNTA3CmvkOw==: 00:18:21.783 11:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZWVjNDg2OGFkMTk5ZjNmMTQ2MWY2MGIyYmJkZWY3MWKp5kEY: --dhchap-ctrl-secret DHHC-1:02:MjExN2Y5YTQ0NzhiNmMyY2ZiYmVhYzE1MzZlYjg1N2M0MWRmYzUyM2ZmOWYxNTA3CmvkOw==: 00:18:22.353 11:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.353 11:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:22.353 11:41:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.353 11:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.353 11:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.353 11:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:22.353 11:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:22.353 11:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:22.614 11:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:18:22.614 11:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:22.614 11:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:22.614 11:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:22.614 11:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:22.614 11:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.614 11:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.614 11:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.614 11:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.614 11:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.614 11:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.614 11:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.614 11:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.185 00:18:23.185 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:23.185 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:23.185 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.185 11:41:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.185 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.185 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.185 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.185 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.185 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:23.185 { 00:18:23.185 "cntlid": 93, 00:18:23.185 "qid": 0, 00:18:23.185 "state": "enabled", 00:18:23.185 "thread": "nvmf_tgt_poll_group_000", 00:18:23.185 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:23.185 "listen_address": { 00:18:23.185 "trtype": "TCP", 00:18:23.185 "adrfam": "IPv4", 00:18:23.185 "traddr": "10.0.0.2", 00:18:23.185 "trsvcid": "4420" 00:18:23.185 }, 00:18:23.185 "peer_address": { 00:18:23.185 "trtype": "TCP", 00:18:23.185 "adrfam": "IPv4", 00:18:23.185 "traddr": "10.0.0.1", 00:18:23.185 "trsvcid": "43136" 00:18:23.185 }, 00:18:23.185 "auth": { 00:18:23.185 "state": "completed", 00:18:23.185 "digest": "sha384", 00:18:23.185 "dhgroup": "ffdhe8192" 00:18:23.185 } 00:18:23.185 } 00:18:23.185 ]' 00:18:23.185 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:23.185 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:23.185 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:23.446 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:23.446 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:23.446 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.446 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.446 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.446 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: --dhchap-ctrl-secret DHHC-1:01:NWI3YjA0Yzc1ZTlmYTEyYTRlMDAzYTA3ZThiMmEyYWQbe3Q6: 00:18:23.446 11:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: --dhchap-ctrl-secret DHHC-1:01:NWI3YjA0Yzc1ZTlmYTEyYTRlMDAzYTA3ZThiMmEyYWQbe3Q6: 00:18:24.388 11:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.388 11:41:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:24.388 11:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.388 11:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.388 11:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.388 11:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:24.388 11:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:24.388 11:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:24.388 11:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:18:24.388 11:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:24.388 11:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:24.388 11:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:24.388 11:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:24.388 11:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.388 11:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:24.388 11:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.388 11:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.388 11:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.388 11:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:24.388 11:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:24.388 11:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:24.959 00:18:24.959 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:24.959 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:24.959 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.219 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.219 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.219 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.219 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.219 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.219 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:25.219 { 00:18:25.219 "cntlid": 95, 00:18:25.219 "qid": 0, 00:18:25.219 "state": "enabled", 00:18:25.219 "thread": "nvmf_tgt_poll_group_000", 00:18:25.219 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:25.219 "listen_address": { 00:18:25.219 "trtype": "TCP", 00:18:25.219 "adrfam": "IPv4", 00:18:25.219 "traddr": "10.0.0.2", 00:18:25.219 "trsvcid": "4420" 00:18:25.219 }, 00:18:25.219 "peer_address": { 00:18:25.219 "trtype": "TCP", 00:18:25.219 "adrfam": "IPv4", 00:18:25.219 "traddr": "10.0.0.1", 00:18:25.219 "trsvcid": "43166" 00:18:25.219 }, 00:18:25.219 "auth": { 00:18:25.219 "state": "completed", 00:18:25.219 "digest": "sha384", 00:18:25.219 "dhgroup": "ffdhe8192" 00:18:25.219 } 00:18:25.219 } 00:18:25.219 ]' 00:18:25.219 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:25.219 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:25.219 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.220 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:25.220 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.220 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.220 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.220 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.481 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjliZTBmOGI2ZDFlNWQ3OGViMWI2M2YyMzc4NjE4YWFlM2ZmYzJmMDRmMzBiNzhkOGQ3MzcyZjIwMjgzOWI4ZOA4O6g=: 00:18:25.481 11:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZjliZTBmOGI2ZDFlNWQ3OGViMWI2M2YyMzc4NjE4YWFlM2ZmYzJmMDRmMzBiNzhkOGQ3MzcyZjIwMjgzOWI4ZOA4O6g=: 00:18:26.050 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.050 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.050 11:41:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:26.050 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.050 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.050 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.050 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:26.050 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:26.050 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:26.050 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:26.050 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:26.311 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:18:26.311 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:26.311 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:26.311 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:26.311 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:26.311 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.311 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.311 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.311 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.311 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.311 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.311 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.311 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.572 00:18:26.572 
11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:26.572 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:26.572 11:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.834 11:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.834 11:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.834 11:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.834 11:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.834 11:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.834 11:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:26.834 { 00:18:26.834 "cntlid": 97, 00:18:26.834 "qid": 0, 00:18:26.834 "state": "enabled", 00:18:26.834 "thread": "nvmf_tgt_poll_group_000", 00:18:26.834 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:26.834 "listen_address": { 00:18:26.834 "trtype": "TCP", 00:18:26.834 "adrfam": "IPv4", 00:18:26.834 "traddr": "10.0.0.2", 00:18:26.834 "trsvcid": "4420" 00:18:26.834 }, 00:18:26.834 "peer_address": { 00:18:26.834 "trtype": "TCP", 00:18:26.834 "adrfam": "IPv4", 00:18:26.834 "traddr": "10.0.0.1", 00:18:26.834 "trsvcid": "43192" 00:18:26.834 }, 00:18:26.834 "auth": { 00:18:26.834 "state": "completed", 00:18:26.834 "digest": "sha512", 00:18:26.834 "dhgroup": "null" 00:18:26.834 } 00:18:26.834 } 00:18:26.834 ]' 00:18:26.834 11:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:26.834 11:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:26.834 11:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:26.834 11:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:26.834 11:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:26.834 11:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.834 11:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.834 11:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.095 11:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmU0MTRjYWU3MTVkYjU3ZjAzOGU2Y2U3YTM5ZDMzMmEzMzBlNjhjOWU5YmU2MmFhQ/wRzQ==: --dhchap-ctrl-secret DHHC-1:03:NDEwZWFiOTkwYzBlMGM4MDI3ZWU1OTdmNTkyNmM5MGQxZmQ3Y2RhNTg1ODhmNDgyOGJmMDFlZWI3N2MzYzU2YTRXPFQ=: 00:18:27.095 11:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmU0MTRjYWU3MTVkYjU3ZjAzOGU2Y2U3YTM5ZDMzMmEzMzBlNjhjOWU5YmU2MmFhQ/wRzQ==: --dhchap-ctrl-secret DHHC-1:03:NDEwZWFiOTkwYzBlMGM4MDI3ZWU1OTdmNTkyNmM5MGQxZmQ3Y2RhNTg1ODhmNDgyOGJmMDFlZWI3N2MzYzU2YTRXPFQ=: 00:18:27.666 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.666 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:27.666 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.666 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.666 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.666 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:27.666 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:27.666 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:27.927 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:18:27.927 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:27.927 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:27.927 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:27.927 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:27.927 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.927 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.927 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.927 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.928 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.928 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.928 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.928 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.188 00:18:28.188 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:28.188 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:28.188 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.188 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.188 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.188 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.188 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.188 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.188 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:28.188 { 00:18:28.188 "cntlid": 99, 00:18:28.188 "qid": 0, 00:18:28.188 "state": "enabled", 00:18:28.188 "thread": "nvmf_tgt_poll_group_000", 00:18:28.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:28.188 "listen_address": { 00:18:28.188 "trtype": "TCP", 00:18:28.188 "adrfam": "IPv4", 00:18:28.189 "traddr": "10.0.0.2", 00:18:28.189 "trsvcid": "4420" 00:18:28.189 }, 00:18:28.189 "peer_address": { 00:18:28.189 "trtype": "TCP", 00:18:28.189 "adrfam": "IPv4", 00:18:28.189 "traddr": "10.0.0.1", 00:18:28.189 "trsvcid": "33580" 00:18:28.189 }, 00:18:28.189 "auth": { 00:18:28.189 "state": "completed", 00:18:28.189 "digest": "sha512", 00:18:28.189 "dhgroup": "null" 00:18:28.189 } 00:18:28.189 } 00:18:28.189 ]' 00:18:28.189 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:28.450 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:28.450 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:28.450 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:28.450 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:28.450 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.450 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.450 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.711 11:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWVjNDg2OGFkMTk5ZjNmMTQ2MWY2MGIyYmJkZWY3MWKp5kEY: --dhchap-ctrl-secret DHHC-1:02:MjExN2Y5YTQ0NzhiNmMyY2ZiYmVhYzE1MzZlYjg1N2M0MWRmYzUyM2ZmOWYxNTA3CmvkOw==: 00:18:28.711 11:41:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZWVjNDg2OGFkMTk5ZjNmMTQ2MWY2MGIyYmJkZWY3MWKp5kEY: --dhchap-ctrl-secret DHHC-1:02:MjExN2Y5YTQ0NzhiNmMyY2ZiYmVhYzE1MzZlYjg1N2M0MWRmYzUyM2ZmOWYxNTA3CmvkOw==: 00:18:29.283 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.283 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:29.283 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.283 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.283 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.283 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:29.283 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:29.283 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:29.544 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:18:29.544 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:29.544 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:29.544 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:29.544 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:29.544 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.544 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.544 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.544 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.544 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.544 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.544 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
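Each iteration traced here follows the same RPC sequence from target/auth.sh's connect_authenticate helper: pin the host-side digest and DH group, authorize the host NQN on the subsystem with the key pair under test, attach a controller through the host-side SPDK app (which performs the in-band authentication), then read the qpair back from the target and assert what was negotiated. A condensed sketch of one iteration is below; the addresses, NQNs, RPC sockets, and key names are the ones visible in this run, while rpc.py is assumed to be SPDK's scripts/rpc.py on PATH and the keys key$3/ckey$3 to be registered beforehand.

#!/usr/bin/env bash
# Sketch of one connect_authenticate iteration, reduced to its RPC calls.
# Assumes: SPDK target on 10.0.0.2:4420 (RPC on the default socket) and a
# host-side SPDK app whose RPC socket is /var/tmp/host.sock, as in this run.
set -euo pipefail
digest=$1 dhgroup=$2 keyid=$3
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

# Pin the host to a single digest and DH group so the negotiated values are known.
rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
  --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Authorize the host NQN on the subsystem with the key (and controller key) under test.
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
  --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# Attaching a controller from the host side performs the DH-HMAC-CHAP exchange.
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
  -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
  --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

# The target reports the negotiated auth parameters per qpair; assert them.
qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0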
00:18:29.544 11:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.544 00:18:29.805 11:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:29.805 11:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.805 11:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:29.806 11:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.806 11:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.806 11:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.806 11:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.806 11:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.806 11:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:29.806 { 00:18:29.806 "cntlid": 101, 00:18:29.806 "qid": 0, 00:18:29.806 "state": "enabled", 00:18:29.806 "thread": "nvmf_tgt_poll_group_000", 00:18:29.806 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:29.806 "listen_address": { 00:18:29.806 "trtype": "TCP", 00:18:29.806 "adrfam": "IPv4", 00:18:29.806 "traddr": "10.0.0.2", 00:18:29.806 "trsvcid": "4420" 00:18:29.806 }, 00:18:29.806 "peer_address": { 00:18:29.806 "trtype": "TCP", 00:18:29.806 "adrfam": "IPv4", 00:18:29.806 "traddr": "10.0.0.1", 00:18:29.806 "trsvcid": "33624" 00:18:29.806 }, 00:18:29.806 "auth": { 00:18:29.806 "state": "completed", 00:18:29.806 "digest": "sha512", 00:18:29.806 "dhgroup": "null" 00:18:29.806 } 00:18:29.806 } 00:18:29.806 ]' 00:18:29.806 11:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:29.806 11:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:29.806 11:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.067 11:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:30.067 11:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:30.067 11:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.067 11:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.067 11:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.067 11:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: --dhchap-ctrl-secret DHHC-1:01:NWI3YjA0Yzc1ZTlmYTEyYTRlMDAzYTA3ZThiMmEyYWQbe3Q6: 00:18:30.067 11:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: --dhchap-ctrl-secret DHHC-1:01:NWI3YjA0Yzc1ZTlmYTEyYTRlMDAzYTA3ZThiMmEyYWQbe3Q6: 00:18:31.011 11:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.011 11:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:31.011 11:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.011 11:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.011 11:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.011 11:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:31.011 11:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:31.011 11:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:31.011 11:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:18:31.011 11:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:31.011 11:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:31.011 11:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:31.011 11:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:31.011 11:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.011 11:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:31.011 11:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.011 11:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.011 11:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.011 11:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:31.011 11:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:31.011 11:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:31.272 00:18:31.272 11:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:31.272 11:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:31.272 11:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.533 11:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.533 11:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.533 11:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.533 11:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.533 11:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.533 11:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:31.533 { 00:18:31.533 "cntlid": 103, 00:18:31.533 "qid": 0, 00:18:31.533 "state": "enabled", 00:18:31.533 "thread": "nvmf_tgt_poll_group_000", 00:18:31.533 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:31.533 "listen_address": { 00:18:31.533 "trtype": "TCP", 00:18:31.533 "adrfam": "IPv4", 00:18:31.533 "traddr": "10.0.0.2", 00:18:31.533 "trsvcid": "4420" 00:18:31.533 }, 00:18:31.533 "peer_address": { 00:18:31.533 "trtype": "TCP", 00:18:31.533 "adrfam": "IPv4", 00:18:31.533 "traddr": "10.0.0.1", 00:18:31.533 "trsvcid": "33656" 00:18:31.533 }, 00:18:31.533 "auth": { 00:18:31.533 "state": "completed", 00:18:31.533 "digest": "sha512", 00:18:31.533 "dhgroup": "null" 00:18:31.533 } 00:18:31.533 } 00:18:31.533 ]' 00:18:31.533 11:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:31.533 11:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:31.533 11:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:31.533 11:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:31.533 11:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:31.533 11:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.533 11:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.533 11:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.794 11:41:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjliZTBmOGI2ZDFlNWQ3OGViMWI2M2YyMzc4NjE4YWFlM2ZmYzJmMDRmMzBiNzhkOGQ3MzcyZjIwMjgzOWI4ZOA4O6g=: 00:18:31.794 11:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZjliZTBmOGI2ZDFlNWQ3OGViMWI2M2YyMzc4NjE4YWFlM2ZmYzJmMDRmMzBiNzhkOGQ3MzcyZjIwMjgzOWI4ZOA4O6g=: 00:18:32.366 11:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.366 11:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:32.366 11:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.366 11:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.366 11:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.366 11:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:32.366 11:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:32.366 11:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:32.366 11:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:32.627 11:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:18:32.627 11:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:32.627 11:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:32.627 11:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:32.627 11:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:32.627 11:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.627 11:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.627 11:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.627 11:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.627 11:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.627 11:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
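Between the RPC-driven checks, the trace also authenticates from the kernel initiator with nvme-cli. The secrets are passed in the DHHC-1 representation, where the two-digit field after "DHHC-1:" identifies the hash the configured secret was transformed with (00 for an unhashed secret, 01/02/03 for SHA-256/384/512) and the base64 payload encodes the secret followed by a CRC-32 check value. Note that the key3 connect above passes only --dhchap-secret: no ckey3 was registered, so the ckeys[$3] expansion at auth.sh@68 is empty and that iteration exercises unidirectional (host-only) authentication. A sketch of the nvme-cli leg, using the key3 secret from this run:

# Kernel-initiator authentication with nvme-cli (sketch; values from this run).
# Only --dhchap-secret is given because no controller key exists for key3.
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
  -q "$hostnqn" --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
  --dhchap-secret 'DHHC-1:03:ZjliZTBmOGI2ZDFlNWQ3OGViMWI2M2YyMzc4NjE4YWFlM2ZmYzJmMDRmMzBiNzhkOGQ3MzcyZjIwMjgzOWI4ZOA4O6g=:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0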
00:18:32.627 11:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.627 11:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.888 00:18:32.888 11:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:32.888 11:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:32.888 11:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.148 11:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.148 11:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.148 11:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.148 11:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.148 11:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.148 11:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:33.148 { 00:18:33.148 "cntlid": 105, 00:18:33.148 "qid": 0, 00:18:33.148 "state": "enabled", 00:18:33.148 "thread": "nvmf_tgt_poll_group_000", 00:18:33.148 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:33.148 "listen_address": { 00:18:33.148 "trtype": "TCP", 00:18:33.148 "adrfam": "IPv4", 00:18:33.148 "traddr": "10.0.0.2", 00:18:33.148 "trsvcid": "4420" 00:18:33.148 }, 00:18:33.148 "peer_address": { 00:18:33.148 "trtype": "TCP", 00:18:33.148 "adrfam": "IPv4", 00:18:33.148 "traddr": "10.0.0.1", 00:18:33.148 "trsvcid": "33674" 00:18:33.148 }, 00:18:33.148 "auth": { 00:18:33.148 "state": "completed", 00:18:33.148 "digest": "sha512", 00:18:33.148 "dhgroup": "ffdhe2048" 00:18:33.148 } 00:18:33.148 } 00:18:33.148 ]' 00:18:33.148 11:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:33.148 11:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:33.148 11:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:33.148 11:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:33.148 11:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:33.148 11:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.148 11:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.148 11:41:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.410 11:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmU0MTRjYWU3MTVkYjU3ZjAzOGU2Y2U3YTM5ZDMzMmEzMzBlNjhjOWU5YmU2MmFhQ/wRzQ==: --dhchap-ctrl-secret DHHC-1:03:NDEwZWFiOTkwYzBlMGM4MDI3ZWU1OTdmNTkyNmM5MGQxZmQ3Y2RhNTg1ODhmNDgyOGJmMDFlZWI3N2MzYzU2YTRXPFQ=: 00:18:33.411 11:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmU0MTRjYWU3MTVkYjU3ZjAzOGU2Y2U3YTM5ZDMzMmEzMzBlNjhjOWU5YmU2MmFhQ/wRzQ==: --dhchap-ctrl-secret DHHC-1:03:NDEwZWFiOTkwYzBlMGM4MDI3ZWU1OTdmNTkyNmM5MGQxZmQ3Y2RhNTg1ODhmNDgyOGJmMDFlZWI3N2MzYzU2YTRXPFQ=: 00:18:33.983 11:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.983 11:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:33.983 11:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.983 11:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.983 11:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.983 11:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:33.983 11:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:33.983 11:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:34.244 11:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:18:34.244 11:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:34.244 11:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:34.244 11:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:34.244 11:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:34.244 11:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.244 11:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.244 11:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.244 11:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:34.244 11:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.244 11:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.244 11:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.244 11:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.505 00:18:34.505 11:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:34.505 11:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:34.505 11:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.505 11:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.505 11:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.505 11:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.505 11:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.765 11:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.765 11:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:34.765 { 00:18:34.765 "cntlid": 107, 00:18:34.765 "qid": 0, 00:18:34.765 "state": "enabled", 00:18:34.765 "thread": "nvmf_tgt_poll_group_000", 00:18:34.765 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:34.765 "listen_address": { 00:18:34.765 "trtype": "TCP", 00:18:34.765 "adrfam": "IPv4", 00:18:34.765 "traddr": "10.0.0.2", 00:18:34.765 "trsvcid": "4420" 00:18:34.765 }, 00:18:34.765 "peer_address": { 00:18:34.765 "trtype": "TCP", 00:18:34.765 "adrfam": "IPv4", 00:18:34.765 "traddr": "10.0.0.1", 00:18:34.765 "trsvcid": "33698" 00:18:34.765 }, 00:18:34.765 "auth": { 00:18:34.765 "state": "completed", 00:18:34.765 "digest": "sha512", 00:18:34.765 "dhgroup": "ffdhe2048" 00:18:34.765 } 00:18:34.765 } 00:18:34.765 ]' 00:18:34.765 11:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:34.765 11:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:34.765 11:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:34.765 11:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:34.765 11:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:18:34.765 11:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.765 11:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.765 11:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.025 11:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWVjNDg2OGFkMTk5ZjNmMTQ2MWY2MGIyYmJkZWY3MWKp5kEY: --dhchap-ctrl-secret DHHC-1:02:MjExN2Y5YTQ0NzhiNmMyY2ZiYmVhYzE1MzZlYjg1N2M0MWRmYzUyM2ZmOWYxNTA3CmvkOw==: 00:18:35.025 11:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZWVjNDg2OGFkMTk5ZjNmMTQ2MWY2MGIyYmJkZWY3MWKp5kEY: --dhchap-ctrl-secret DHHC-1:02:MjExN2Y5YTQ0NzhiNmMyY2ZiYmVhYzE1MzZlYjg1N2M0MWRmYzUyM2ZmOWYxNTA3CmvkOw==: 00:18:35.595 11:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.595 11:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:35.595 11:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.595 11:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.595 11:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.595 11:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:35.596 11:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:35.596 11:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:35.856 11:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:18:35.856 11:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:35.856 11:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:35.856 11:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:35.856 11:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:35.856 11:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.856 11:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
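[Note] The --dhchap-secret / --dhchap-ctrl-secret strings in the nvme connect lines above use the NVMe DH-HMAC-CHAP secret representation. A sketch of how one of this run's secrets breaks down (the meaning of the two-digit field follows common nvme-cli/NVMe-oF auth conventions and is an assumption of this note, not something the log itself states):

    DHHC-1:01:ZWVjNDg2OGFkMTk5ZjNmMTQ2MWY2MGIyYmJkZWY3MWKp5kEY:
    |      |  |
    |      |  +-- base64-encoded secret payload
    |      +----- HMAC identifier: 00 = secret used as-is, 01 = SHA-256,
    |             02 = SHA-384, 03 = SHA-512 transformation
    +------------ format tag for DH-HMAC-CHAP secrets

The host secret (--dhchap-secret) and the controller secret (--dhchap-ctrl-secret) are independent values, which is what makes the authentication in this test bidirectional.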
00:18:35.856 11:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.856 11:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.856 11:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.856 11:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.856 11:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.856 11:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.115 00:18:36.115 11:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:36.115 11:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:36.115 11:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.116 11:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.116 11:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.116 11:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.116 11:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.116 11:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.116 11:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:36.116 { 00:18:36.116 "cntlid": 109, 00:18:36.116 "qid": 0, 00:18:36.116 "state": "enabled", 00:18:36.116 "thread": "nvmf_tgt_poll_group_000", 00:18:36.116 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:36.116 "listen_address": { 00:18:36.116 "trtype": "TCP", 00:18:36.116 "adrfam": "IPv4", 00:18:36.116 "traddr": "10.0.0.2", 00:18:36.116 "trsvcid": "4420" 00:18:36.116 }, 00:18:36.116 "peer_address": { 00:18:36.116 "trtype": "TCP", 00:18:36.116 "adrfam": "IPv4", 00:18:36.116 "traddr": "10.0.0.1", 00:18:36.116 "trsvcid": "33722" 00:18:36.116 }, 00:18:36.116 "auth": { 00:18:36.116 "state": "completed", 00:18:36.116 "digest": "sha512", 00:18:36.116 "dhgroup": "ffdhe2048" 00:18:36.116 } 00:18:36.116 } 00:18:36.116 ]' 00:18:36.116 11:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:36.375 11:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:36.375 11:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:36.375 11:42:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:36.375 11:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:36.375 11:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.375 11:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.375 11:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.635 11:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: --dhchap-ctrl-secret DHHC-1:01:NWI3YjA0Yzc1ZTlmYTEyYTRlMDAzYTA3ZThiMmEyYWQbe3Q6: 00:18:36.635 11:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: --dhchap-ctrl-secret DHHC-1:01:NWI3YjA0Yzc1ZTlmYTEyYTRlMDAzYTA3ZThiMmEyYWQbe3Q6: 00:18:37.206 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.206 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:37.206 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.206 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.206 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.206 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:37.206 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:37.206 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:37.467 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:18:37.467 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:37.467 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:37.467 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:37.467 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:37.467 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.467 11:42:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:37.467 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.467 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.467 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.467 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:37.467 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:37.467 11:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:37.728 00:18:37.728 11:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:37.728 11:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:37.728 11:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.728 11:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.728 11:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.728 11:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.728 11:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.989 11:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.989 11:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:37.989 { 00:18:37.989 "cntlid": 111, 00:18:37.989 "qid": 0, 00:18:37.989 "state": "enabled", 00:18:37.989 "thread": "nvmf_tgt_poll_group_000", 00:18:37.989 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:37.989 "listen_address": { 00:18:37.989 "trtype": "TCP", 00:18:37.989 "adrfam": "IPv4", 00:18:37.989 "traddr": "10.0.0.2", 00:18:37.989 "trsvcid": "4420" 00:18:37.989 }, 00:18:37.989 "peer_address": { 00:18:37.989 "trtype": "TCP", 00:18:37.989 "adrfam": "IPv4", 00:18:37.989 "traddr": "10.0.0.1", 00:18:37.989 "trsvcid": "33752" 00:18:37.989 }, 00:18:37.989 "auth": { 00:18:37.989 "state": "completed", 00:18:37.989 "digest": "sha512", 00:18:37.989 "dhgroup": "ffdhe2048" 00:18:37.989 } 00:18:37.989 } 00:18:37.989 ]' 00:18:37.989 11:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:37.989 11:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:37.989 
11:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:37.989 11:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:37.990 11:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:37.990 11:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.990 11:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.990 11:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.249 11:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjliZTBmOGI2ZDFlNWQ3OGViMWI2M2YyMzc4NjE4YWFlM2ZmYzJmMDRmMzBiNzhkOGQ3MzcyZjIwMjgzOWI4ZOA4O6g=: 00:18:38.250 11:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZjliZTBmOGI2ZDFlNWQ3OGViMWI2M2YyMzc4NjE4YWFlM2ZmYzJmMDRmMzBiNzhkOGQ3MzcyZjIwMjgzOWI4ZOA4O6g=: 00:18:38.823 11:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.823 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.823 11:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:38.823 11:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.823 11:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.823 11:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.823 11:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:38.823 11:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:38.823 11:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:38.823 11:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:39.084 11:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:18:39.084 11:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:39.084 11:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:39.084 11:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:39.084 11:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:39.084 11:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.084 11:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.084 11:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.084 11:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.084 11:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.084 11:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.084 11:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.084 11:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.346 00:18:39.346 11:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:39.346 11:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:39.346 11:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.346 11:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.346 11:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.346 11:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.346 11:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.346 11:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.346 11:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:39.346 { 00:18:39.346 "cntlid": 113, 00:18:39.346 "qid": 0, 00:18:39.346 "state": "enabled", 00:18:39.346 "thread": "nvmf_tgt_poll_group_000", 00:18:39.346 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:39.346 "listen_address": { 00:18:39.346 "trtype": "TCP", 00:18:39.346 "adrfam": "IPv4", 00:18:39.346 "traddr": "10.0.0.2", 00:18:39.346 "trsvcid": "4420" 00:18:39.346 }, 00:18:39.346 "peer_address": { 00:18:39.346 "trtype": "TCP", 00:18:39.346 "adrfam": "IPv4", 00:18:39.346 "traddr": "10.0.0.1", 00:18:39.346 "trsvcid": "33784" 00:18:39.346 }, 00:18:39.346 "auth": { 00:18:39.346 "state": "completed", 00:18:39.346 "digest": "sha512", 00:18:39.346 "dhgroup": "ffdhe3072" 00:18:39.346 } 00:18:39.346 } 00:18:39.346 ]' 00:18:39.346 11:42:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:39.609 11:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:39.609 11:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:39.609 11:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:39.609 11:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:39.609 11:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.609 11:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.609 11:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.870 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmU0MTRjYWU3MTVkYjU3ZjAzOGU2Y2U3YTM5ZDMzMmEzMzBlNjhjOWU5YmU2MmFhQ/wRzQ==: --dhchap-ctrl-secret DHHC-1:03:NDEwZWFiOTkwYzBlMGM4MDI3ZWU1OTdmNTkyNmM5MGQxZmQ3Y2RhNTg1ODhmNDgyOGJmMDFlZWI3N2MzYzU2YTRXPFQ=: 00:18:39.870 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmU0MTRjYWU3MTVkYjU3ZjAzOGU2Y2U3YTM5ZDMzMmEzMzBlNjhjOWU5YmU2MmFhQ/wRzQ==: --dhchap-ctrl-secret DHHC-1:03:NDEwZWFiOTkwYzBlMGM4MDI3ZWU1OTdmNTkyNmM5MGQxZmQ3Y2RhNTg1ODhmNDgyOGJmMDFlZWI3N2MzYzU2YTRXPFQ=: 00:18:40.572 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.572 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:40.572 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.572 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.572 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.572 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:40.572 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:40.572 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:40.572 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:18:40.572 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:40.572 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:18:40.572 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:40.572 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:40.572 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.572 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.572 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.572 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.572 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.572 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.572 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.572 11:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.865 00:18:40.865 11:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:40.865 11:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:40.865 11:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.126 11:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.126 11:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.126 11:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.126 11:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.126 11:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.126 11:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:41.126 { 00:18:41.126 "cntlid": 115, 00:18:41.126 "qid": 0, 00:18:41.126 "state": "enabled", 00:18:41.126 "thread": "nvmf_tgt_poll_group_000", 00:18:41.126 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:41.126 "listen_address": { 00:18:41.126 "trtype": "TCP", 00:18:41.126 "adrfam": "IPv4", 00:18:41.126 "traddr": "10.0.0.2", 00:18:41.126 "trsvcid": "4420" 00:18:41.126 }, 00:18:41.126 "peer_address": { 00:18:41.126 "trtype": "TCP", 00:18:41.126 "adrfam": "IPv4", 
00:18:41.126 "traddr": "10.0.0.1", 00:18:41.126 "trsvcid": "33800" 00:18:41.126 }, 00:18:41.126 "auth": { 00:18:41.126 "state": "completed", 00:18:41.126 "digest": "sha512", 00:18:41.126 "dhgroup": "ffdhe3072" 00:18:41.126 } 00:18:41.126 } 00:18:41.126 ]' 00:18:41.126 11:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:41.126 11:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:41.126 11:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:41.126 11:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:41.126 11:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:41.126 11:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.126 11:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.126 11:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.389 11:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWVjNDg2OGFkMTk5ZjNmMTQ2MWY2MGIyYmJkZWY3MWKp5kEY: --dhchap-ctrl-secret DHHC-1:02:MjExN2Y5YTQ0NzhiNmMyY2ZiYmVhYzE1MzZlYjg1N2M0MWRmYzUyM2ZmOWYxNTA3CmvkOw==: 00:18:41.389 11:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZWVjNDg2OGFkMTk5ZjNmMTQ2MWY2MGIyYmJkZWY3MWKp5kEY: --dhchap-ctrl-secret DHHC-1:02:MjExN2Y5YTQ0NzhiNmMyY2ZiYmVhYzE1MzZlYjg1N2M0MWRmYzUyM2ZmOWYxNTA3CmvkOw==: 00:18:41.961 11:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.961 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.961 11:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:41.961 11:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.961 11:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.961 11:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.961 11:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:41.961 11:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:41.961 11:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:42.221 11:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:18:42.221 11:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:42.221 11:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:42.221 11:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:42.221 11:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:42.221 11:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.221 11:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.221 11:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.221 11:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.221 11:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.221 11:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.221 11:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.221 11:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.482 00:18:42.482 11:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:42.482 11:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.482 11:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:42.742 11:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.742 11:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.742 11:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.742 11:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.742 11:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.742 11:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:42.742 { 00:18:42.742 "cntlid": 117, 00:18:42.742 "qid": 0, 00:18:42.742 "state": "enabled", 00:18:42.742 "thread": "nvmf_tgt_poll_group_000", 00:18:42.742 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:42.742 "listen_address": { 00:18:42.742 "trtype": "TCP", 
00:18:42.742 "adrfam": "IPv4", 00:18:42.742 "traddr": "10.0.0.2", 00:18:42.742 "trsvcid": "4420" 00:18:42.742 }, 00:18:42.742 "peer_address": { 00:18:42.742 "trtype": "TCP", 00:18:42.742 "adrfam": "IPv4", 00:18:42.742 "traddr": "10.0.0.1", 00:18:42.742 "trsvcid": "33830" 00:18:42.742 }, 00:18:42.742 "auth": { 00:18:42.742 "state": "completed", 00:18:42.742 "digest": "sha512", 00:18:42.742 "dhgroup": "ffdhe3072" 00:18:42.742 } 00:18:42.742 } 00:18:42.742 ]' 00:18:42.742 11:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:42.742 11:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:42.742 11:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:42.742 11:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:42.742 11:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:42.742 11:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.742 11:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.742 11:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.003 11:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: --dhchap-ctrl-secret DHHC-1:01:NWI3YjA0Yzc1ZTlmYTEyYTRlMDAzYTA3ZThiMmEyYWQbe3Q6: 00:18:43.003 11:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: --dhchap-ctrl-secret DHHC-1:01:NWI3YjA0Yzc1ZTlmYTEyYTRlMDAzYTA3ZThiMmEyYWQbe3Q6: 00:18:43.575 11:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.575 11:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:43.575 11:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.575 11:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.575 11:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.575 11:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:43.575 11:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:43.575 11:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:43.837 11:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:18:43.837 11:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:43.837 11:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:43.837 11:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:43.837 11:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:43.837 11:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.837 11:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:43.837 11:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.837 11:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.837 11:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.837 11:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:43.837 11:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:43.837 11:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:44.099 00:18:44.099 11:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:44.099 11:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.099 11:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:44.360 11:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.360 11:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.360 11:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.360 11:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.360 11:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.360 11:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:44.360 { 00:18:44.360 "cntlid": 119, 00:18:44.360 "qid": 0, 00:18:44.360 "state": "enabled", 00:18:44.360 "thread": "nvmf_tgt_poll_group_000", 00:18:44.360 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:44.360 "listen_address": { 00:18:44.360 "trtype": "TCP", 00:18:44.360 "adrfam": "IPv4", 00:18:44.360 "traddr": "10.0.0.2", 00:18:44.360 "trsvcid": "4420" 00:18:44.360 }, 00:18:44.360 "peer_address": { 00:18:44.360 "trtype": "TCP", 00:18:44.360 "adrfam": "IPv4", 00:18:44.360 "traddr": "10.0.0.1", 00:18:44.360 "trsvcid": "33860" 00:18:44.360 }, 00:18:44.360 "auth": { 00:18:44.360 "state": "completed", 00:18:44.360 "digest": "sha512", 00:18:44.360 "dhgroup": "ffdhe3072" 00:18:44.360 } 00:18:44.360 } 00:18:44.360 ]' 00:18:44.360 11:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:44.360 11:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:44.360 11:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:44.360 11:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:44.360 11:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:44.360 11:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.360 11:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.360 11:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.622 11:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjliZTBmOGI2ZDFlNWQ3OGViMWI2M2YyMzc4NjE4YWFlM2ZmYzJmMDRmMzBiNzhkOGQ3MzcyZjIwMjgzOWI4ZOA4O6g=: 00:18:44.622 11:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZjliZTBmOGI2ZDFlNWQ3OGViMWI2M2YyMzc4NjE4YWFlM2ZmYzJmMDRmMzBiNzhkOGQ3MzcyZjIwMjgzOWI4ZOA4O6g=: 00:18:45.194 11:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.194 11:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:45.194 11:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.194 11:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.194 11:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.194 11:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:45.194 11:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:45.194 11:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:45.194 11:42:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:45.456 11:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:18:45.456 11:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:45.456 11:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:45.456 11:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:45.456 11:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:45.456 11:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.456 11:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.456 11:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.456 11:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.456 11:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.456 11:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.456 11:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.456 11:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.718 00:18:45.718 11:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:45.718 11:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:45.718 11:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.979 11:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.979 11:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.979 11:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.979 11:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.979 11:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.979 11:42:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:45.979 { 00:18:45.979 "cntlid": 121, 00:18:45.979 "qid": 0, 00:18:45.979 "state": "enabled", 00:18:45.979 "thread": "nvmf_tgt_poll_group_000", 00:18:45.979 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:45.979 "listen_address": { 00:18:45.979 "trtype": "TCP", 00:18:45.979 "adrfam": "IPv4", 00:18:45.979 "traddr": "10.0.0.2", 00:18:45.979 "trsvcid": "4420" 00:18:45.979 }, 00:18:45.979 "peer_address": { 00:18:45.979 "trtype": "TCP", 00:18:45.979 "adrfam": "IPv4", 00:18:45.979 "traddr": "10.0.0.1", 00:18:45.979 "trsvcid": "33886" 00:18:45.979 }, 00:18:45.979 "auth": { 00:18:45.979 "state": "completed", 00:18:45.979 "digest": "sha512", 00:18:45.979 "dhgroup": "ffdhe4096" 00:18:45.979 } 00:18:45.979 } 00:18:45.979 ]' 00:18:45.979 11:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:45.979 11:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:45.979 11:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:45.979 11:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:45.979 11:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:45.979 11:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.979 11:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.979 11:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.241 11:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmU0MTRjYWU3MTVkYjU3ZjAzOGU2Y2U3YTM5ZDMzMmEzMzBlNjhjOWU5YmU2MmFhQ/wRzQ==: --dhchap-ctrl-secret DHHC-1:03:NDEwZWFiOTkwYzBlMGM4MDI3ZWU1OTdmNTkyNmM5MGQxZmQ3Y2RhNTg1ODhmNDgyOGJmMDFlZWI3N2MzYzU2YTRXPFQ=: 00:18:46.241 11:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmU0MTRjYWU3MTVkYjU3ZjAzOGU2Y2U3YTM5ZDMzMmEzMzBlNjhjOWU5YmU2MmFhQ/wRzQ==: --dhchap-ctrl-secret DHHC-1:03:NDEwZWFiOTkwYzBlMGM4MDI3ZWU1OTdmNTkyNmM5MGQxZmQ3Y2RhNTg1ODhmNDgyOGJmMDFlZWI3N2MzYzU2YTRXPFQ=: 00:18:46.813 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.813 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:46.813 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.813 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.813 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
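[Note] Every iteration above follows the same target/auth.sh cycle for each (digest, dhgroup, key) combination. A condensed bash sketch of one pass, with the rpc.py path, sockets, addresses, and NQNs copied verbatim from this run (key1/ckey1 are keyring names registered in a setup phase that precedes this excerpt, so their creation is assumed here):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

    # 1. Pin the SPDK host-side initiator to a single digest/dhgroup pair
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

    # 2. Register the host on the target with host + controller keys (bidirectional)
    $rpc nvmf_subsystem_add_host $subnqn $hostnqn \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # 3. Attach through the SPDK host stack, authenticating with the same keys
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q $hostnqn -n $subnqn -b nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # 4. Verify the negotiated auth parameters from the target's view
    $rpc nvmf_subsystem_get_qpairs $subnqn \
        | jq -r '.[0].auth | .state, .digest, .dhgroup'   # completed / sha512 / ffdhe4096

    # 5. Detach, reconnect once via nvme-cli using the expanded DHHC-1 secrets,
    #    disconnect, then deregister the host before the next combination
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    $rpc nvmf_subsystem_remove_host $subnqn $hostnqn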
00:18:46.813 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:46.814 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:46.814 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:47.075 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:18:47.075 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:47.075 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:47.075 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:47.075 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:47.075 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.075 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.075 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.075 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.075 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.075 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.075 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.075 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.337 00:18:47.337 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:47.337 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:47.337 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.598 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.598 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.598 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.598 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.598 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.598 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:47.598 { 00:18:47.598 "cntlid": 123, 00:18:47.598 "qid": 0, 00:18:47.598 "state": "enabled", 00:18:47.598 "thread": "nvmf_tgt_poll_group_000", 00:18:47.598 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:47.598 "listen_address": { 00:18:47.598 "trtype": "TCP", 00:18:47.598 "adrfam": "IPv4", 00:18:47.598 "traddr": "10.0.0.2", 00:18:47.598 "trsvcid": "4420" 00:18:47.598 }, 00:18:47.598 "peer_address": { 00:18:47.598 "trtype": "TCP", 00:18:47.598 "adrfam": "IPv4", 00:18:47.598 "traddr": "10.0.0.1", 00:18:47.598 "trsvcid": "47804" 00:18:47.598 }, 00:18:47.598 "auth": { 00:18:47.598 "state": "completed", 00:18:47.598 "digest": "sha512", 00:18:47.598 "dhgroup": "ffdhe4096" 00:18:47.598 } 00:18:47.598 } 00:18:47.598 ]' 00:18:47.598 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:47.598 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:47.598 11:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:47.598 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:47.598 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:47.598 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.598 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.598 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.859 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWVjNDg2OGFkMTk5ZjNmMTQ2MWY2MGIyYmJkZWY3MWKp5kEY: --dhchap-ctrl-secret DHHC-1:02:MjExN2Y5YTQ0NzhiNmMyY2ZiYmVhYzE1MzZlYjg1N2M0MWRmYzUyM2ZmOWYxNTA3CmvkOw==: 00:18:47.859 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZWVjNDg2OGFkMTk5ZjNmMTQ2MWY2MGIyYmJkZWY3MWKp5kEY: --dhchap-ctrl-secret DHHC-1:02:MjExN2Y5YTQ0NzhiNmMyY2ZiYmVhYzE1MzZlYjg1N2M0MWRmYzUyM2ZmOWYxNTA3CmvkOw==: 00:18:48.433 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.433 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:48.433 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.433 11:42:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.433 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.433 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:48.433 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:48.433 11:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:48.695 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:48.695 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:48.695 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:48.695 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:48.695 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:48.695 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.695 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.695 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.695 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.695 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.695 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.695 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.695 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.956 00:18:48.956 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:48.956 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:48.956 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.217 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.217 11:42:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.217 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.217 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.217 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.217 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:49.217 { 00:18:49.217 "cntlid": 125, 00:18:49.217 "qid": 0, 00:18:49.217 "state": "enabled", 00:18:49.217 "thread": "nvmf_tgt_poll_group_000", 00:18:49.217 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:49.217 "listen_address": { 00:18:49.217 "trtype": "TCP", 00:18:49.217 "adrfam": "IPv4", 00:18:49.217 "traddr": "10.0.0.2", 00:18:49.217 "trsvcid": "4420" 00:18:49.217 }, 00:18:49.217 "peer_address": { 00:18:49.217 "trtype": "TCP", 00:18:49.217 "adrfam": "IPv4", 00:18:49.217 "traddr": "10.0.0.1", 00:18:49.217 "trsvcid": "47838" 00:18:49.217 }, 00:18:49.217 "auth": { 00:18:49.217 "state": "completed", 00:18:49.217 "digest": "sha512", 00:18:49.217 "dhgroup": "ffdhe4096" 00:18:49.217 } 00:18:49.217 } 00:18:49.217 ]' 00:18:49.217 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:49.217 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:49.217 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:49.217 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:49.217 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:49.217 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.217 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.217 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.478 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: --dhchap-ctrl-secret DHHC-1:01:NWI3YjA0Yzc1ZTlmYTEyYTRlMDAzYTA3ZThiMmEyYWQbe3Q6: 00:18:49.478 11:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: --dhchap-ctrl-secret DHHC-1:01:NWI3YjA0Yzc1ZTlmYTEyYTRlMDAzYTA3ZThiMmEyYWQbe3Q6: 00:18:50.049 11:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.049 11:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:50.049 11:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.049 11:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.308 11:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.308 11:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:50.308 11:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:50.308 11:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:50.308 11:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:50.308 11:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:50.308 11:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:50.308 11:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:50.308 11:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:50.308 11:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.308 11:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:50.308 11:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.308 11:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.308 11:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.308 11:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:50.308 11:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:50.308 11:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:50.567 00:18:50.567 11:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:50.567 11:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:50.567 11:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.827 11:42:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.827 11:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.827 11:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.827 11:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.827 11:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.827 11:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:50.827 { 00:18:50.827 "cntlid": 127, 00:18:50.827 "qid": 0, 00:18:50.827 "state": "enabled", 00:18:50.827 "thread": "nvmf_tgt_poll_group_000", 00:18:50.827 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:50.827 "listen_address": { 00:18:50.827 "trtype": "TCP", 00:18:50.827 "adrfam": "IPv4", 00:18:50.827 "traddr": "10.0.0.2", 00:18:50.827 "trsvcid": "4420" 00:18:50.827 }, 00:18:50.827 "peer_address": { 00:18:50.827 "trtype": "TCP", 00:18:50.827 "adrfam": "IPv4", 00:18:50.827 "traddr": "10.0.0.1", 00:18:50.827 "trsvcid": "47860" 00:18:50.827 }, 00:18:50.827 "auth": { 00:18:50.827 "state": "completed", 00:18:50.827 "digest": "sha512", 00:18:50.827 "dhgroup": "ffdhe4096" 00:18:50.827 } 00:18:50.827 } 00:18:50.827 ]' 00:18:50.827 11:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:50.827 11:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:50.827 11:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:50.827 11:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:50.827 11:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:51.088 11:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.088 11:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.088 11:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.088 11:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjliZTBmOGI2ZDFlNWQ3OGViMWI2M2YyMzc4NjE4YWFlM2ZmYzJmMDRmMzBiNzhkOGQ3MzcyZjIwMjgzOWI4ZOA4O6g=: 00:18:51.088 11:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZjliZTBmOGI2ZDFlNWQ3OGViMWI2M2YyMzc4NjE4YWFlM2ZmYzJmMDRmMzBiNzhkOGQ3MzcyZjIwMjgzOWI4ZOA4O6g=: 00:18:51.661 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.922 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:51.922 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.922 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.922 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.922 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:51.922 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:51.922 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:51.922 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:51.922 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:51.922 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:51.922 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:51.922 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:51.922 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:51.922 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.922 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.922 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.922 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.922 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.922 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.922 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:51.922 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.497 00:18:52.497 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:52.497 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:52.497 
11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.497 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.497 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.497 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.497 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.497 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.497 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:52.497 { 00:18:52.497 "cntlid": 129, 00:18:52.497 "qid": 0, 00:18:52.497 "state": "enabled", 00:18:52.497 "thread": "nvmf_tgt_poll_group_000", 00:18:52.497 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:52.497 "listen_address": { 00:18:52.497 "trtype": "TCP", 00:18:52.497 "adrfam": "IPv4", 00:18:52.497 "traddr": "10.0.0.2", 00:18:52.497 "trsvcid": "4420" 00:18:52.497 }, 00:18:52.497 "peer_address": { 00:18:52.497 "trtype": "TCP", 00:18:52.497 "adrfam": "IPv4", 00:18:52.497 "traddr": "10.0.0.1", 00:18:52.497 "trsvcid": "47884" 00:18:52.497 }, 00:18:52.497 "auth": { 00:18:52.497 "state": "completed", 00:18:52.497 "digest": "sha512", 00:18:52.497 "dhgroup": "ffdhe6144" 00:18:52.497 } 00:18:52.497 } 00:18:52.497 ]' 00:18:52.497 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:52.497 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:52.497 11:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:52.759 11:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:52.759 11:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:52.759 11:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.759 11:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.759 11:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.759 11:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmU0MTRjYWU3MTVkYjU3ZjAzOGU2Y2U3YTM5ZDMzMmEzMzBlNjhjOWU5YmU2MmFhQ/wRzQ==: --dhchap-ctrl-secret DHHC-1:03:NDEwZWFiOTkwYzBlMGM4MDI3ZWU1OTdmNTkyNmM5MGQxZmQ3Y2RhNTg1ODhmNDgyOGJmMDFlZWI3N2MzYzU2YTRXPFQ=: 00:18:52.759 11:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmU0MTRjYWU3MTVkYjU3ZjAzOGU2Y2U3YTM5ZDMzMmEzMzBlNjhjOWU5YmU2MmFhQ/wRzQ==: --dhchap-ctrl-secret 
DHHC-1:03:NDEwZWFiOTkwYzBlMGM4MDI3ZWU1OTdmNTkyNmM5MGQxZmQ3Y2RhNTg1ODhmNDgyOGJmMDFlZWI3N2MzYzU2YTRXPFQ=: 00:18:53.704 11:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.704 11:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:53.704 11:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.704 11:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.704 11:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.704 11:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:53.704 11:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:53.704 11:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:53.704 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:53.704 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:53.704 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:53.704 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:53.704 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:53.704 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.704 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:53.704 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.704 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.704 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.704 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:53.704 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:53.704 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:53.967 00:18:53.967 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:53.967 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:53.967 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.228 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.228 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.228 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.228 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.228 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.228 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:54.228 { 00:18:54.228 "cntlid": 131, 00:18:54.228 "qid": 0, 00:18:54.228 "state": "enabled", 00:18:54.228 "thread": "nvmf_tgt_poll_group_000", 00:18:54.228 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:54.228 "listen_address": { 00:18:54.228 "trtype": "TCP", 00:18:54.228 "adrfam": "IPv4", 00:18:54.228 "traddr": "10.0.0.2", 00:18:54.228 "trsvcid": "4420" 00:18:54.228 }, 00:18:54.228 "peer_address": { 00:18:54.228 "trtype": "TCP", 00:18:54.228 "adrfam": "IPv4", 00:18:54.228 "traddr": "10.0.0.1", 00:18:54.228 "trsvcid": "47926" 00:18:54.228 }, 00:18:54.228 "auth": { 00:18:54.228 "state": "completed", 00:18:54.228 "digest": "sha512", 00:18:54.228 "dhgroup": "ffdhe6144" 00:18:54.228 } 00:18:54.228 } 00:18:54.228 ]' 00:18:54.228 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:54.229 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:54.229 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:54.229 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:54.229 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:54.490 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.490 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.490 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.490 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWVjNDg2OGFkMTk5ZjNmMTQ2MWY2MGIyYmJkZWY3MWKp5kEY: --dhchap-ctrl-secret DHHC-1:02:MjExN2Y5YTQ0NzhiNmMyY2ZiYmVhYzE1MzZlYjg1N2M0MWRmYzUyM2ZmOWYxNTA3CmvkOw==: 00:18:54.490 11:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZWVjNDg2OGFkMTk5ZjNmMTQ2MWY2MGIyYmJkZWY3MWKp5kEY: --dhchap-ctrl-secret DHHC-1:02:MjExN2Y5YTQ0NzhiNmMyY2ZiYmVhYzE1MzZlYjg1N2M0MWRmYzUyM2ZmOWYxNTA3CmvkOw==: 00:18:55.433 11:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.433 11:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:55.433 11:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.433 11:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.433 11:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.433 11:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:55.433 11:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:55.433 11:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:55.433 11:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:55.433 11:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:55.433 11:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:55.433 11:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:55.433 11:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:55.433 11:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.433 11:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.433 11:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.433 11:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.433 11:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.433 11:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.433 11:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.433 11:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.694 00:18:55.694 11:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:55.694 11:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:55.694 11:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.955 11:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.955 11:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.955 11:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.955 11:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.955 11:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.955 11:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:55.955 { 00:18:55.955 "cntlid": 133, 00:18:55.955 "qid": 0, 00:18:55.955 "state": "enabled", 00:18:55.955 "thread": "nvmf_tgt_poll_group_000", 00:18:55.955 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:55.955 "listen_address": { 00:18:55.955 "trtype": "TCP", 00:18:55.955 "adrfam": "IPv4", 00:18:55.955 "traddr": "10.0.0.2", 00:18:55.955 "trsvcid": "4420" 00:18:55.955 }, 00:18:55.955 "peer_address": { 00:18:55.955 "trtype": "TCP", 00:18:55.955 "adrfam": "IPv4", 00:18:55.955 "traddr": "10.0.0.1", 00:18:55.955 "trsvcid": "47950" 00:18:55.955 }, 00:18:55.955 "auth": { 00:18:55.955 "state": "completed", 00:18:55.955 "digest": "sha512", 00:18:55.955 "dhgroup": "ffdhe6144" 00:18:55.955 } 00:18:55.955 } 00:18:55.955 ]' 00:18:55.955 11:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:55.955 11:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:55.955 11:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:55.955 11:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:55.955 11:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:56.217 11:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.217 11:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.217 11:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.217 11:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: --dhchap-ctrl-secret 
DHHC-1:01:NWI3YjA0Yzc1ZTlmYTEyYTRlMDAzYTA3ZThiMmEyYWQbe3Q6: 00:18:56.217 11:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: --dhchap-ctrl-secret DHHC-1:01:NWI3YjA0Yzc1ZTlmYTEyYTRlMDAzYTA3ZThiMmEyYWQbe3Q6: 00:18:57.160 11:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.160 11:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:57.160 11:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.160 11:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.160 11:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.160 11:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:57.160 11:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:57.160 11:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:57.160 11:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:57.160 11:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:57.160 11:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:57.160 11:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:57.160 11:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:57.160 11:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.160 11:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:57.160 11:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.160 11:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.160 11:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.160 11:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:57.160 11:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:18:57.160 11:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:57.421 00:18:57.421 11:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:57.421 11:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:57.421 11:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.682 11:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.682 11:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.682 11:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.682 11:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.682 11:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.682 11:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:57.682 { 00:18:57.682 "cntlid": 135, 00:18:57.682 "qid": 0, 00:18:57.682 "state": "enabled", 00:18:57.682 "thread": "nvmf_tgt_poll_group_000", 00:18:57.682 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:57.682 "listen_address": { 00:18:57.682 "trtype": "TCP", 00:18:57.682 "adrfam": "IPv4", 00:18:57.682 "traddr": "10.0.0.2", 00:18:57.682 "trsvcid": "4420" 00:18:57.682 }, 00:18:57.682 "peer_address": { 00:18:57.682 "trtype": "TCP", 00:18:57.682 "adrfam": "IPv4", 00:18:57.682 "traddr": "10.0.0.1", 00:18:57.682 "trsvcid": "53070" 00:18:57.682 }, 00:18:57.682 "auth": { 00:18:57.682 "state": "completed", 00:18:57.682 "digest": "sha512", 00:18:57.682 "dhgroup": "ffdhe6144" 00:18:57.682 } 00:18:57.682 } 00:18:57.682 ]' 00:18:57.682 11:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:57.682 11:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:57.682 11:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:57.682 11:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:57.682 11:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:57.943 11:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.943 11:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.943 11:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.943 11:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZjliZTBmOGI2ZDFlNWQ3OGViMWI2M2YyMzc4NjE4YWFlM2ZmYzJmMDRmMzBiNzhkOGQ3MzcyZjIwMjgzOWI4ZOA4O6g=: 00:18:57.943 11:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZjliZTBmOGI2ZDFlNWQ3OGViMWI2M2YyMzc4NjE4YWFlM2ZmYzJmMDRmMzBiNzhkOGQ3MzcyZjIwMjgzOWI4ZOA4O6g=: 00:18:58.514 11:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.776 11:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:58.776 11:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.776 11:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.776 11:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.776 11:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:58.776 11:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:58.776 11:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:58.776 11:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:58.776 11:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:58.776 11:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:58.776 11:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:58.776 11:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:58.776 11:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:58.776 11:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.776 11:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.776 11:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.776 11:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.776 11:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.776 11:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.776 11:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.776 11:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.347 00:18:59.347 11:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:59.347 11:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:59.347 11:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.608 11:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.608 11:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.608 11:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.608 11:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.608 11:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.608 11:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:59.608 { 00:18:59.608 "cntlid": 137, 00:18:59.608 "qid": 0, 00:18:59.608 "state": "enabled", 00:18:59.608 "thread": "nvmf_tgt_poll_group_000", 00:18:59.608 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:59.608 "listen_address": { 00:18:59.608 "trtype": "TCP", 00:18:59.608 "adrfam": "IPv4", 00:18:59.608 "traddr": "10.0.0.2", 00:18:59.608 "trsvcid": "4420" 00:18:59.608 }, 00:18:59.608 "peer_address": { 00:18:59.608 "trtype": "TCP", 00:18:59.608 "adrfam": "IPv4", 00:18:59.608 "traddr": "10.0.0.1", 00:18:59.608 "trsvcid": "53096" 00:18:59.608 }, 00:18:59.608 "auth": { 00:18:59.608 "state": "completed", 00:18:59.608 "digest": "sha512", 00:18:59.608 "dhgroup": "ffdhe8192" 00:18:59.608 } 00:18:59.608 } 00:18:59.608 ]' 00:18:59.608 11:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:59.608 11:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:59.608 11:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:59.608 11:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:59.608 11:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:59.608 11:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.608 11:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.608 11:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.868 11:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmU0MTRjYWU3MTVkYjU3ZjAzOGU2Y2U3YTM5ZDMzMmEzMzBlNjhjOWU5YmU2MmFhQ/wRzQ==: --dhchap-ctrl-secret DHHC-1:03:NDEwZWFiOTkwYzBlMGM4MDI3ZWU1OTdmNTkyNmM5MGQxZmQ3Y2RhNTg1ODhmNDgyOGJmMDFlZWI3N2MzYzU2YTRXPFQ=: 00:18:59.868 11:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmU0MTRjYWU3MTVkYjU3ZjAzOGU2Y2U3YTM5ZDMzMmEzMzBlNjhjOWU5YmU2MmFhQ/wRzQ==: --dhchap-ctrl-secret DHHC-1:03:NDEwZWFiOTkwYzBlMGM4MDI3ZWU1OTdmNTkyNmM5MGQxZmQ3Y2RhNTg1ODhmNDgyOGJmMDFlZWI3N2MzYzU2YTRXPFQ=: 00:19:00.439 11:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.440 11:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:00.440 11:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.440 11:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.440 11:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.440 11:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:00.440 11:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:00.440 11:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:00.700 11:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:19:00.700 11:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:00.700 11:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:00.700 11:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:00.700 11:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:00.700 11:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.700 11:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.700 11:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.700 11:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.700 11:42:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.700 11:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.700 11:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.700 11:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.270 00:19:01.270 11:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:01.270 11:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:01.270 11:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.270 11:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.270 11:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.270 11:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.270 11:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.270 11:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.270 11:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:01.270 { 00:19:01.270 "cntlid": 139, 00:19:01.270 "qid": 0, 00:19:01.270 "state": "enabled", 00:19:01.270 "thread": "nvmf_tgt_poll_group_000", 00:19:01.270 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:01.270 "listen_address": { 00:19:01.270 "trtype": "TCP", 00:19:01.270 "adrfam": "IPv4", 00:19:01.270 "traddr": "10.0.0.2", 00:19:01.270 "trsvcid": "4420" 00:19:01.270 }, 00:19:01.270 "peer_address": { 00:19:01.270 "trtype": "TCP", 00:19:01.270 "adrfam": "IPv4", 00:19:01.270 "traddr": "10.0.0.1", 00:19:01.270 "trsvcid": "53126" 00:19:01.270 }, 00:19:01.270 "auth": { 00:19:01.270 "state": "completed", 00:19:01.270 "digest": "sha512", 00:19:01.270 "dhgroup": "ffdhe8192" 00:19:01.270 } 00:19:01.270 } 00:19:01.270 ]' 00:19:01.270 11:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:01.530 11:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:01.530 11:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:01.530 11:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:01.530 11:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:01.530 11:42:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.530 11:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.530 11:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.790 11:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZWVjNDg2OGFkMTk5ZjNmMTQ2MWY2MGIyYmJkZWY3MWKp5kEY: --dhchap-ctrl-secret DHHC-1:02:MjExN2Y5YTQ0NzhiNmMyY2ZiYmVhYzE1MzZlYjg1N2M0MWRmYzUyM2ZmOWYxNTA3CmvkOw==: 00:19:01.790 11:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:ZWVjNDg2OGFkMTk5ZjNmMTQ2MWY2MGIyYmJkZWY3MWKp5kEY: --dhchap-ctrl-secret DHHC-1:02:MjExN2Y5YTQ0NzhiNmMyY2ZiYmVhYzE1MzZlYjg1N2M0MWRmYzUyM2ZmOWYxNTA3CmvkOw==: 00:19:02.360 11:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.360 11:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:02.360 11:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.360 11:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.360 11:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.360 11:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:02.360 11:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:02.360 11:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:02.621 11:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:19:02.621 11:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:02.621 11:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:02.621 11:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:02.621 11:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:02.621 11:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.621 11:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.621 11:42:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.621 11:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.621 11:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.621 11:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.621 11:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.621 11:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.882 00:19:03.143 11:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:03.143 11:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:03.143 11:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.143 11:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.143 11:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.143 11:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.143 11:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.143 11:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.143 11:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:03.143 { 00:19:03.143 "cntlid": 141, 00:19:03.143 "qid": 0, 00:19:03.143 "state": "enabled", 00:19:03.143 "thread": "nvmf_tgt_poll_group_000", 00:19:03.143 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:03.143 "listen_address": { 00:19:03.143 "trtype": "TCP", 00:19:03.143 "adrfam": "IPv4", 00:19:03.143 "traddr": "10.0.0.2", 00:19:03.143 "trsvcid": "4420" 00:19:03.143 }, 00:19:03.143 "peer_address": { 00:19:03.143 "trtype": "TCP", 00:19:03.143 "adrfam": "IPv4", 00:19:03.143 "traddr": "10.0.0.1", 00:19:03.143 "trsvcid": "53142" 00:19:03.143 }, 00:19:03.143 "auth": { 00:19:03.143 "state": "completed", 00:19:03.143 "digest": "sha512", 00:19:03.143 "dhgroup": "ffdhe8192" 00:19:03.143 } 00:19:03.143 } 00:19:03.143 ]' 00:19:03.143 11:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:03.143 11:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:03.143 11:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:03.405 11:42:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:03.405 11:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:03.405 11:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.405 11:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.405 11:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.666 11:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: --dhchap-ctrl-secret DHHC-1:01:NWI3YjA0Yzc1ZTlmYTEyYTRlMDAzYTA3ZThiMmEyYWQbe3Q6: 00:19:03.666 11:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: --dhchap-ctrl-secret DHHC-1:01:NWI3YjA0Yzc1ZTlmYTEyYTRlMDAzYTA3ZThiMmEyYWQbe3Q6: 00:19:04.236 11:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.236 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.236 11:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:04.236 11:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.236 11:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.236 11:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.236 11:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:04.236 11:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:04.236 11:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:04.497 11:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:19:04.497 11:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:04.497 11:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:04.497 11:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:04.497 11:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:04.497 11:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.497 11:42:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:04.497 11:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.497 11:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.497 11:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.497 11:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:04.497 11:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:04.497 11:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:04.759 00:19:04.759 11:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:04.759 11:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:04.759 11:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.019 11:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.019 11:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.019 11:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.019 11:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.019 11:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.019 11:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:05.019 { 00:19:05.019 "cntlid": 143, 00:19:05.019 "qid": 0, 00:19:05.019 "state": "enabled", 00:19:05.019 "thread": "nvmf_tgt_poll_group_000", 00:19:05.019 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:05.019 "listen_address": { 00:19:05.019 "trtype": "TCP", 00:19:05.019 "adrfam": "IPv4", 00:19:05.019 "traddr": "10.0.0.2", 00:19:05.019 "trsvcid": "4420" 00:19:05.019 }, 00:19:05.019 "peer_address": { 00:19:05.019 "trtype": "TCP", 00:19:05.019 "adrfam": "IPv4", 00:19:05.019 "traddr": "10.0.0.1", 00:19:05.019 "trsvcid": "53164" 00:19:05.019 }, 00:19:05.019 "auth": { 00:19:05.019 "state": "completed", 00:19:05.019 "digest": "sha512", 00:19:05.019 "dhgroup": "ffdhe8192" 00:19:05.019 } 00:19:05.019 } 00:19:05.019 ]' 00:19:05.019 11:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:05.019 11:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:05.019 
11:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:05.280 11:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:05.280 11:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:05.280 11:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.280 11:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.280 11:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.280 11:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZjliZTBmOGI2ZDFlNWQ3OGViMWI2M2YyMzc4NjE4YWFlM2ZmYzJmMDRmMzBiNzhkOGQ3MzcyZjIwMjgzOWI4ZOA4O6g=: 00:19:05.280 11:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZjliZTBmOGI2ZDFlNWQ3OGViMWI2M2YyMzc4NjE4YWFlM2ZmYzJmMDRmMzBiNzhkOGQ3MzcyZjIwMjgzOWI4ZOA4O6g=: 00:19:06.221 11:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.221 11:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:06.221 11:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.221 11:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.221 11:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.221 11:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:06.221 11:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:19:06.221 11:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:19:06.221 11:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:06.221 11:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:06.221 11:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:06.221 11:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:19:06.221 11:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:06.221 11:42:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:06.221 11:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:06.221 11:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:06.221 11:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.221 11:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.221 11:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.221 11:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.221 11:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.221 11:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.221 11:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.221 11:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.793 00:19:06.793 11:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:06.793 11:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:06.793 11:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.053 11:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.053 11:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.053 11:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.053 11:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.053 11:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.053 11:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:07.053 { 00:19:07.053 "cntlid": 145, 00:19:07.053 "qid": 0, 00:19:07.053 "state": "enabled", 00:19:07.053 "thread": "nvmf_tgt_poll_group_000", 00:19:07.053 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:07.053 "listen_address": { 00:19:07.053 "trtype": "TCP", 00:19:07.053 "adrfam": "IPv4", 00:19:07.053 "traddr": "10.0.0.2", 00:19:07.053 "trsvcid": "4420" 00:19:07.053 }, 00:19:07.053 "peer_address": { 00:19:07.053 
"trtype": "TCP", 00:19:07.053 "adrfam": "IPv4", 00:19:07.053 "traddr": "10.0.0.1", 00:19:07.053 "trsvcid": "53194" 00:19:07.053 }, 00:19:07.053 "auth": { 00:19:07.053 "state": "completed", 00:19:07.053 "digest": "sha512", 00:19:07.053 "dhgroup": "ffdhe8192" 00:19:07.053 } 00:19:07.053 } 00:19:07.053 ]' 00:19:07.053 11:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:07.053 11:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:07.053 11:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:07.053 11:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:07.053 11:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:07.053 11:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.053 11:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.053 11:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.313 11:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NmU0MTRjYWU3MTVkYjU3ZjAzOGU2Y2U3YTM5ZDMzMmEzMzBlNjhjOWU5YmU2MmFhQ/wRzQ==: --dhchap-ctrl-secret DHHC-1:03:NDEwZWFiOTkwYzBlMGM4MDI3ZWU1OTdmNTkyNmM5MGQxZmQ3Y2RhNTg1ODhmNDgyOGJmMDFlZWI3N2MzYzU2YTRXPFQ=: 00:19:07.313 11:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:NmU0MTRjYWU3MTVkYjU3ZjAzOGU2Y2U3YTM5ZDMzMmEzMzBlNjhjOWU5YmU2MmFhQ/wRzQ==: --dhchap-ctrl-secret DHHC-1:03:NDEwZWFiOTkwYzBlMGM4MDI3ZWU1OTdmNTkyNmM5MGQxZmQ3Y2RhNTg1ODhmNDgyOGJmMDFlZWI3N2MzYzU2YTRXPFQ=: 00:19:07.884 11:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.884 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.884 11:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:07.884 11:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.884 11:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.884 11:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.884 11:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:07.884 11:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.884 11:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.884 11:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.884 11:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:19:07.884 11:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:07.884 11:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:19:07.884 11:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:19:07.884 11:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:07.884 11:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:19:07.884 11:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:07.884 11:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:19:07.884 11:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:07.884 11:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:19:08.456 request: 00:19:08.456 { 00:19:08.456 "name": "nvme0", 00:19:08.456 "trtype": "tcp", 00:19:08.456 "traddr": "10.0.0.2", 00:19:08.456 "adrfam": "ipv4", 00:19:08.456 "trsvcid": "4420", 00:19:08.456 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:08.456 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:08.456 "prchk_reftag": false, 00:19:08.456 "prchk_guard": false, 00:19:08.456 "hdgst": false, 00:19:08.456 "ddgst": false, 00:19:08.456 "dhchap_key": "key2", 00:19:08.456 "allow_unrecognized_csi": false, 00:19:08.456 "method": "bdev_nvme_attach_controller", 00:19:08.456 "req_id": 1 00:19:08.456 } 00:19:08.456 Got JSON-RPC error response 00:19:08.456 response: 00:19:08.456 { 00:19:08.456 "code": -5, 00:19:08.456 "message": "Input/output error" 00:19:08.456 } 00:19:08.456 11:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:08.456 11:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:08.456 11:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:08.456 11:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:08.456 11:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:08.456 11:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.456 11:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.456 11:42:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.456 11:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.456 11:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.456 11:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.456 11:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.456 11:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:08.456 11:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:08.456 11:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:08.456 11:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:19:08.456 11:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:08.456 11:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:19:08.456 11:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:08.456 11:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:08.456 11:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:08.456 11:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:09.029 request: 00:19:09.029 { 00:19:09.029 "name": "nvme0", 00:19:09.029 "trtype": "tcp", 00:19:09.029 "traddr": "10.0.0.2", 00:19:09.029 "adrfam": "ipv4", 00:19:09.029 "trsvcid": "4420", 00:19:09.029 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:09.029 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:09.029 "prchk_reftag": false, 00:19:09.029 "prchk_guard": false, 00:19:09.029 "hdgst": false, 00:19:09.029 "ddgst": false, 00:19:09.029 "dhchap_key": "key1", 00:19:09.029 "dhchap_ctrlr_key": "ckey2", 00:19:09.029 "allow_unrecognized_csi": false, 00:19:09.029 "method": "bdev_nvme_attach_controller", 00:19:09.029 "req_id": 1 00:19:09.029 } 00:19:09.029 Got JSON-RPC error response 00:19:09.029 response: 00:19:09.029 { 00:19:09.029 "code": -5, 00:19:09.029 "message": "Input/output error" 00:19:09.029 } 00:19:09.029 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:09.029 11:42:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:09.029 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:09.029 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:09.029 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:09.029 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.029 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.029 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.029 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:09.029 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.029 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.029 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.029 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.029 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:09.029 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.029 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:19:09.029 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:09.029 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:19:09.029 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:09.029 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.029 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.030 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:09.290 request: 00:19:09.290 { 00:19:09.290 "name": "nvme0", 00:19:09.290 "trtype": "tcp", 00:19:09.290 "traddr": "10.0.0.2", 00:19:09.290 "adrfam": "ipv4", 00:19:09.290 "trsvcid": "4420", 00:19:09.290 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:09.290 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:09.290 "prchk_reftag": false, 00:19:09.290 "prchk_guard": false, 00:19:09.290 "hdgst": false, 00:19:09.290 "ddgst": false, 00:19:09.290 "dhchap_key": "key1", 00:19:09.290 "dhchap_ctrlr_key": "ckey1", 00:19:09.290 "allow_unrecognized_csi": false, 00:19:09.290 "method": "bdev_nvme_attach_controller", 00:19:09.290 "req_id": 1 00:19:09.290 } 00:19:09.290 Got JSON-RPC error response 00:19:09.290 response: 00:19:09.290 { 00:19:09.290 "code": -5, 00:19:09.290 "message": "Input/output error" 00:19:09.290 } 00:19:09.290 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:09.290 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:09.290 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:09.290 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:09.290 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:09.290 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.290 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.290 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.290 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1045210 00:19:09.290 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 1045210 ']' 00:19:09.290 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 1045210 00:19:09.290 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:19:09.290 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:09.290 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1045210 00:19:09.552 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:09.552 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:09.552 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1045210' 00:19:09.552 killing process with pid 1045210 00:19:09.552 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 1045210 00:19:09.552 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 1045210 00:19:09.552 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:09.552 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:09.552 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:09.552 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:09.552 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1071558 00:19:09.552 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:09.552 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1071558 00:19:09.552 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 1071558 ']' 00:19:09.552 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:09.552 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:09.552 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:09.552 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:09.552 11:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.495 11:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:10.495 11:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:19:10.495 11:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:10.495 11:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:10.495 11:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.495 11:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:10.495 11:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:10.495 11:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 1071558 00:19:10.495 11:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 1071558 ']' 00:19:10.495 11:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.495 11:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:10.495 11:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
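For orientation: the trace above kills the first nvmf_tgt instance (pid 1045210) and relaunches the target inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, so nothing initializes until an RPC releases the app, plus -L nvmf_auth for authentication debug logging. A minimal bash sketch of that restart-and-reload flow follows; the framework_start_init call is an assumption here (inferred from --wait-for-rpc, since the rpc_cmd batch at target/auth.sh@164 is collapsed in this trace and only its "null0" output is visible), and paths and key names are the ones from this run:

  # Relaunch the target parked in wait-for-rpc mode, with nvmf_auth debug logs
  # (binary path and flags as in the trace above):
  build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  nvmfpid=$!
  # Release initialization over RPC; framework_start_init is assumed here,
  # not shown in this excerpt:
  scripts/rpc.py framework_start_init
  # Re-register the DH-HMAC-CHAP secrets as named keyring keys, as the
  # keyring_file_add_key calls that follow do for key0/ckey0 through key3:
  scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.J3I
  scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7LP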
00:19:10.495 11:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:10.495 11:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.756 11:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:10.756 11:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:19:10.756 11:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:19:10.756 11:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.756 11:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.756 null0 00:19:10.756 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.756 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:10.756 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.J3I 00:19:10.756 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.756 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.756 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.756 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.7LP ]] 00:19:10.756 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7LP 00:19:10.756 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.756 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.756 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.756 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:10.756 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.dcu 00:19:10.756 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.756 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.756 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.756 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.LcL ]] 00:19:10.756 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.LcL 00:19:10.756 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.757 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.757 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.757 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:10.757 11:42:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.m21 00:19:10.757 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.757 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.757 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.757 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.dCm ]] 00:19:10.757 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.dCm 00:19:10.757 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.757 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.757 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.757 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:19:10.757 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.SFx 00:19:10.757 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.757 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.757 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.757 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:19:10.757 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:19:10.757 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:10.757 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:10.757 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:10.757 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:10.757 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.757 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:10.757 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.757 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.757 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.757 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:10.757 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
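With the keys loaded, target/auth.sh@179 re-runs connect_authenticate for key3: attach a controller from the host side by keyring name, then verify on the target that the queue pair actually negotiated the expected digest and DH group. The rpc.py calls that follow do exactly this; condensed into a sketch (sockets, addresses, and NQNs are the ones from this run):

  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  # Host side: reference the key by its keyring name, not the raw secret.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
  # Target side: the qpair listing must report the negotiated parameters.
  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == sha512 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]

The NOT-wrapped attach attempts later in the trace invert the expectation: once bdev_nvme_set_options narrows the host to --dhchap-digests sha256 only, or to --dhchap-dhgroups ffdhe2048, or the subsystem host entry no longer carries a matching key, the same bdev_nvme_attach_controller call must fail, which is why those request dumps end in the JSON-RPC response {"code": -5, "message": "Input/output error"} instead of an attached controller name.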
00:19:10.757 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:11.698 nvme0n1 00:19:11.698 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:11.698 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:11.698 11:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.698 11:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.698 11:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.698 11:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.698 11:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.698 11:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.698 11:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:11.698 { 00:19:11.698 "cntlid": 1, 00:19:11.698 "qid": 0, 00:19:11.698 "state": "enabled", 00:19:11.698 "thread": "nvmf_tgt_poll_group_000", 00:19:11.698 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:11.698 "listen_address": { 00:19:11.698 "trtype": "TCP", 00:19:11.698 "adrfam": "IPv4", 00:19:11.698 "traddr": "10.0.0.2", 00:19:11.698 "trsvcid": "4420" 00:19:11.698 }, 00:19:11.698 "peer_address": { 00:19:11.698 "trtype": "TCP", 00:19:11.698 "adrfam": "IPv4", 00:19:11.698 "traddr": "10.0.0.1", 00:19:11.698 "trsvcid": "35406" 00:19:11.698 }, 00:19:11.698 "auth": { 00:19:11.698 "state": "completed", 00:19:11.698 "digest": "sha512", 00:19:11.698 "dhgroup": "ffdhe8192" 00:19:11.698 } 00:19:11.698 } 00:19:11.698 ]' 00:19:11.698 11:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:11.698 11:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:11.698 11:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:11.960 11:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:11.960 11:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:11.960 11:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.960 11:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.960 11:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.960 11:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZjliZTBmOGI2ZDFlNWQ3OGViMWI2M2YyMzc4NjE4YWFlM2ZmYzJmMDRmMzBiNzhkOGQ3MzcyZjIwMjgzOWI4ZOA4O6g=: 00:19:11.960 11:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ZjliZTBmOGI2ZDFlNWQ3OGViMWI2M2YyMzc4NjE4YWFlM2ZmYzJmMDRmMzBiNzhkOGQ3MzcyZjIwMjgzOWI4ZOA4O6g=: 00:19:12.903 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.903 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:12.903 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.903 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.903 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.903 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:12.903 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.903 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.903 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.903 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:19:12.903 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:19:12.903 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:12.903 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:12.903 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:12.903 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:19:12.903 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:12.903 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:19:12.903 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:12.903 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:12.903 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:12.903 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:13.165 request: 00:19:13.165 { 00:19:13.165 "name": "nvme0", 00:19:13.165 "trtype": "tcp", 00:19:13.165 "traddr": "10.0.0.2", 00:19:13.165 "adrfam": "ipv4", 00:19:13.165 "trsvcid": "4420", 00:19:13.165 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:13.165 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:13.165 "prchk_reftag": false, 00:19:13.165 "prchk_guard": false, 00:19:13.165 "hdgst": false, 00:19:13.165 "ddgst": false, 00:19:13.165 "dhchap_key": "key3", 00:19:13.165 "allow_unrecognized_csi": false, 00:19:13.165 "method": "bdev_nvme_attach_controller", 00:19:13.165 "req_id": 1 00:19:13.165 } 00:19:13.165 Got JSON-RPC error response 00:19:13.165 response: 00:19:13.165 { 00:19:13.165 "code": -5, 00:19:13.165 "message": "Input/output error" 00:19:13.165 } 00:19:13.165 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:13.165 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:13.165 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:13.165 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:13.165 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:19:13.165 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:19:13.165 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:13.165 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:19:13.165 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:19:13.165 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:13.426 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:19:13.426 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:19:13.426 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:13.426 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:19:13.426 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:13.426 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:13.426 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:13.426 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:13.426 request: 00:19:13.426 { 00:19:13.426 "name": "nvme0", 00:19:13.426 "trtype": "tcp", 00:19:13.426 "traddr": "10.0.0.2", 00:19:13.426 "adrfam": "ipv4", 00:19:13.426 "trsvcid": "4420", 00:19:13.426 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:13.426 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:13.426 "prchk_reftag": false, 00:19:13.426 "prchk_guard": false, 00:19:13.426 "hdgst": false, 00:19:13.426 "ddgst": false, 00:19:13.426 "dhchap_key": "key3", 00:19:13.426 "allow_unrecognized_csi": false, 00:19:13.426 "method": "bdev_nvme_attach_controller", 00:19:13.426 "req_id": 1 00:19:13.426 } 00:19:13.426 Got JSON-RPC error response 00:19:13.426 response: 00:19:13.426 { 00:19:13.426 "code": -5, 00:19:13.426 "message": "Input/output error" 00:19:13.426 } 00:19:13.426 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:13.426 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:13.426 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:13.426 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:13.426 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:13.426 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:19:13.426 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:19:13.426 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:13.426 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:13.426 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:13.688 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:13.688 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.688 11:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.688 11:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.688 11:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:13.688 11:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.688 11:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.688 11:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.688 11:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:13.688 11:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:13.688 11:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:13.688 11:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:19:13.688 11:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:13.688 11:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:19:13.688 11:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:13.688 11:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:13.688 11:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:13.688 11:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:13.949 request: 00:19:13.949 { 00:19:13.949 "name": "nvme0", 00:19:13.949 "trtype": "tcp", 00:19:13.949 "traddr": "10.0.0.2", 00:19:13.949 "adrfam": "ipv4", 00:19:13.949 "trsvcid": "4420", 00:19:13.949 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:13.949 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:13.949 "prchk_reftag": false, 00:19:13.949 "prchk_guard": false, 00:19:13.949 "hdgst": false, 00:19:13.949 "ddgst": false, 00:19:13.949 "dhchap_key": "key0", 00:19:13.949 "dhchap_ctrlr_key": "key1", 00:19:13.949 "allow_unrecognized_csi": false, 00:19:13.949 "method": "bdev_nvme_attach_controller", 00:19:13.949 "req_id": 1 00:19:13.949 } 00:19:13.949 Got JSON-RPC error response 00:19:13.949 response: 00:19:13.949 { 00:19:13.949 "code": -5, 00:19:13.949 "message": "Input/output error" 00:19:13.949 } 00:19:13.949 11:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:13.949 11:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:13.949 11:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:13.949 11:42:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:13.949 11:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:19:13.949 11:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:13.949 11:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:19:14.210 nvme0n1 00:19:14.210 11:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:19:14.210 11:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:19:14.210 11:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.471 11:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.471 11:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.471 11:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.471 11:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:19:14.471 11:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.471 11:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.471 11:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.471 11:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:14.471 11:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:14.472 11:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:15.413 nvme0n1 00:19:15.413 11:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:19:15.413 11:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:19:15.413 11:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.413 11:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.413 11:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:15.413 11:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.413 11:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.413 11:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.413 11:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:19:15.413 11:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.413 11:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:19:15.675 11:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.675 11:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: --dhchap-ctrl-secret DHHC-1:03:ZjliZTBmOGI2ZDFlNWQ3OGViMWI2M2YyMzc4NjE4YWFlM2ZmYzJmMDRmMzBiNzhkOGQ3MzcyZjIwMjgzOWI4ZOA4O6g=: 00:19:15.675 11:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: --dhchap-ctrl-secret DHHC-1:03:ZjliZTBmOGI2ZDFlNWQ3OGViMWI2M2YyMzc4NjE4YWFlM2ZmYzJmMDRmMzBiNzhkOGQ3MzcyZjIwMjgzOWI4ZOA4O6g=: 00:19:16.246 11:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:19:16.246 11:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:19:16.246 11:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:19:16.246 11:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:19:16.246 11:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:19:16.246 11:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:19:16.246 11:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:19:16.246 11:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.246 11:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.507 11:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:19:16.507 11:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:16.507 11:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:19:16.507 11:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:19:16.507 11:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:16.507 11:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:19:16.507 11:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:16.507 11:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:19:16.507 11:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:16.507 11:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:19:17.080 request: 00:19:17.080 { 00:19:17.080 "name": "nvme0", 00:19:17.080 "trtype": "tcp", 00:19:17.080 "traddr": "10.0.0.2", 00:19:17.080 "adrfam": "ipv4", 00:19:17.080 "trsvcid": "4420", 00:19:17.080 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:17.080 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:19:17.080 "prchk_reftag": false, 00:19:17.080 "prchk_guard": false, 00:19:17.080 "hdgst": false, 00:19:17.080 "ddgst": false, 00:19:17.080 "dhchap_key": "key1", 00:19:17.080 "allow_unrecognized_csi": false, 00:19:17.080 "method": "bdev_nvme_attach_controller", 00:19:17.080 "req_id": 1 00:19:17.080 } 00:19:17.080 Got JSON-RPC error response 00:19:17.080 response: 00:19:17.080 { 00:19:17.080 "code": -5, 00:19:17.080 "message": "Input/output error" 00:19:17.080 } 00:19:17.080 11:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:17.080 11:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:17.080 11:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:17.080 11:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:17.080 11:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:17.080 11:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:17.080 11:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:17.651 nvme0n1 00:19:17.651 11:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:19:17.651 11:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:19:17.651 11:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.912 11:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.912 11:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.912 11:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.173 11:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:18.173 11:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.173 11:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.173 11:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.173 11:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:19:18.173 11:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:18.173 11:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:19:18.434 nvme0n1 00:19:18.434 11:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:19:18.434 11:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:19:18.434 11:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.434 11:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.434 11:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.434 11:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.695 11:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:18.695 11:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.695 11:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.695 11:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.695 11:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZWVjNDg2OGFkMTk5ZjNmMTQ2MWY2MGIyYmJkZWY3MWKp5kEY: '' 2s 00:19:18.695 11:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:18.695 11:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:18.695 11:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZWVjNDg2OGFkMTk5ZjNmMTQ2MWY2MGIyYmJkZWY3MWKp5kEY: 00:19:18.695 11:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:19:18.695 11:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:18.695 11:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:18.695 11:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZWVjNDg2OGFkMTk5ZjNmMTQ2MWY2MGIyYmJkZWY3MWKp5kEY: ]] 00:19:18.695 11:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZWVjNDg2OGFkMTk5ZjNmMTQ2MWY2MGIyYmJkZWY3MWKp5kEY: 00:19:18.695 11:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:19:18.695 11:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:18.695 11:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:20.612 11:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:19:20.612 11:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:19:20.612 11:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:19:20.612 11:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:19:20.872 11:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:19:20.872 11:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:19:20.872 11:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:19:20.872 11:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2 00:19:20.872 11:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.872 11:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.872 11:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.872 11:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: 2s 00:19:20.872 11:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:19:20.872 11:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:19:20.872 11:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:19:20.872 11:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: 00:19:20.872 11:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:19:20.872 11:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:19:20.872 11:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:19:20.872 11:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: ]] 00:19:20.872 11:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NWI1YjRlNzk4NjY0ODUyYTE1MWRkZTQ4ZDg1MjlmMTUxMWJjYzI5YWM2ODM1YjBmgXNbCQ==: 00:19:20.872 11:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:19:20.872 11:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:19:22.786 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:19:22.786 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:19:22.786 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:19:22.786 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:19:22.786 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:19:22.786 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:19:22.786 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:19:22.786 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.786 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:22.786 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.786 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.786 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.786 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:22.786 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:22.786 11:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:23.728 nvme0n1 00:19:23.728 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:23.728 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.728 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.728 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.728 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:23.728 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:23.989 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:19:23.989 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:19:23.989 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.249 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.249 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:24.249 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.249 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.249 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.249 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:19:24.249 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:19:24.510 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:19:24.510 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:19:24.510 11:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.770 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.770 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:24.770 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.770 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.770 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.770 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:24.770 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:24.770 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:24.770 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:24.770 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:24.770 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:24.770 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:24.770 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:24.770 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:19:25.031 request: 00:19:25.031 { 00:19:25.031 "name": "nvme0", 00:19:25.031 "dhchap_key": "key1", 00:19:25.031 "dhchap_ctrlr_key": "key3", 00:19:25.031 "method": "bdev_nvme_set_keys", 00:19:25.031 "req_id": 1 00:19:25.031 } 00:19:25.031 Got JSON-RPC error response 00:19:25.031 response: 00:19:25.031 { 00:19:25.031 "code": -13, 00:19:25.031 "message": "Permission denied" 00:19:25.031 } 00:19:25.031 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:25.031 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:25.031 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:25.031 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:25.031 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:25.031 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:25.031 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.291 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:19:25.291 11:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:19:26.234 11:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:19:26.234 11:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:19:26.234 11:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.495 11:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:19:26.495 11:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:19:26.495 11:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.495 11:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.495 11:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.495 11:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:26.495 11:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:26.495 11:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:19:27.437 nvme0n1 00:19:27.437 11:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:19:27.437 11:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.437 11:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.437 11:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.437 11:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:27.437 11:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:27.437 11:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:27.437 11:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
00:19:27.437 11:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:27.437 11:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:27.437 11:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:27.437 11:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:27.437 11:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:19:27.697 request: 00:19:27.697 { 00:19:27.697 "name": "nvme0", 00:19:27.697 "dhchap_key": "key2", 00:19:27.697 "dhchap_ctrlr_key": "key0", 00:19:27.697 "method": "bdev_nvme_set_keys", 00:19:27.697 "req_id": 1 00:19:27.697 } 00:19:27.697 Got JSON-RPC error response 00:19:27.697 response: 00:19:27.697 { 00:19:27.697 "code": -13, 00:19:27.697 "message": "Permission denied" 00:19:27.697 } 00:19:27.697 11:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:27.697 11:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:27.697 11:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:27.697 11:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:27.697 11:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:27.697 11:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:27.697 11:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.957 11:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:19:27.957 11:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:19:28.900 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:19:28.900 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:19:28.900 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.160 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:19:29.160 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:19:29.160 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:19:29.160 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1045271 00:19:29.160 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 1045271 ']' 00:19:29.160 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 1045271 00:19:29.160 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:19:29.160 
11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:29.160 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1045271 00:19:29.160 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:29.160 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:29.160 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1045271' 00:19:29.160 killing process with pid 1045271 00:19:29.160 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 1045271 00:19:29.160 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 1045271 00:19:29.420 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:29.420 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:29.420 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:19:29.420 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:29.420 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:19:29.420 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:29.420 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:29.420 rmmod nvme_tcp 00:19:29.420 rmmod nvme_fabrics 00:19:29.420 rmmod nvme_keyring 00:19:29.420 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:29.420 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:19:29.420 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:19:29.420 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1071558 ']' 00:19:29.420 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1071558 00:19:29.420 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 1071558 ']' 00:19:29.420 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 1071558 00:19:29.420 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:19:29.420 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:29.420 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1071558 00:19:29.420 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:29.420 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:29.420 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1071558' 00:19:29.420 killing process with pid 1071558 00:19:29.420 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 1071558 00:19:29.420 11:42:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 1071558 00:19:29.680 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:29.680 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:29.680 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:29.680 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:19:29.680 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:19:29.680 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:29.680 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:19:29.680 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:29.680 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:29.680 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:29.680 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:29.680 11:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:31.594 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:31.594 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.J3I /tmp/spdk.key-sha256.dcu /tmp/spdk.key-sha384.m21 /tmp/spdk.key-sha512.SFx /tmp/spdk.key-sha512.7LP /tmp/spdk.key-sha384.LcL /tmp/spdk.key-sha256.dCm '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:31.594 00:19:31.594 real 2m37.177s 00:19:31.594 user 5m53.471s 00:19:31.594 sys 0m24.762s 00:19:31.594 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:31.594 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.594 ************************************ 00:19:31.594 END TEST nvmf_auth_target 00:19:31.594 ************************************ 00:19:31.594 11:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:19:31.594 11:42:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:31.594 11:42:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:19:31.594 11:42:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:31.594 11:42:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:31.855 ************************************ 00:19:31.855 START TEST nvmf_bdevio_no_huge 00:19:31.855 ************************************ 00:19:31.855 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:31.855 * Looking for test storage... 
00:19:31.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:31.855 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:31.855 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:19:31.855 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:31.855 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:31.855 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:31.855 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:31.855 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:31.855 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:19:31.855 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:19:31.855 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:19:31.855 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:19:31.855 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:19:31.855 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:19:31.855 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:19:31.855 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:31.855 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:19:31.855 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:19:31.855 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:31.855 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:31.855 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:19:31.855 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:19:31.855 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:31.855 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:19:31.855 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:19:31.855 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:19:31.855 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:19:31.855 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:31.855 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:19:31.855 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:19:31.855 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:31.855 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:31.855 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:19:31.855 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:31.855 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:31.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.856 --rc genhtml_branch_coverage=1 00:19:31.856 --rc genhtml_function_coverage=1 00:19:31.856 --rc genhtml_legend=1 00:19:31.856 --rc geninfo_all_blocks=1 00:19:31.856 --rc geninfo_unexecuted_blocks=1 00:19:31.856 00:19:31.856 ' 00:19:31.856 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:31.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.856 --rc genhtml_branch_coverage=1 00:19:31.856 --rc genhtml_function_coverage=1 00:19:31.856 --rc genhtml_legend=1 00:19:31.856 --rc geninfo_all_blocks=1 00:19:31.856 --rc geninfo_unexecuted_blocks=1 00:19:31.856 00:19:31.856 ' 00:19:31.856 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:31.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.856 --rc genhtml_branch_coverage=1 00:19:31.856 --rc genhtml_function_coverage=1 00:19:31.856 --rc genhtml_legend=1 00:19:31.856 --rc geninfo_all_blocks=1 00:19:31.856 --rc geninfo_unexecuted_blocks=1 00:19:31.856 00:19:31.856 ' 00:19:31.856 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:31.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:31.856 --rc genhtml_branch_coverage=1 00:19:31.856 --rc genhtml_function_coverage=1 00:19:31.856 --rc genhtml_legend=1 00:19:31.856 --rc geninfo_all_blocks=1 00:19:31.856 --rc geninfo_unexecuted_blocks=1 00:19:31.856 00:19:31.856 ' 00:19:31.856 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:31.856 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:31.856 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:31.856 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:31.856 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:31.856 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:31.856 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:31.856 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:31.856 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:31.856 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:31.856 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:31.856 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:31.856 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:31.856 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:31.856 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:31.856 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:31.856 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:32.117 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:32.117 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:32.117 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:19:32.117 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:32.117 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:32.117 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:32.117 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.117 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.117 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.117 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:32.118 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.118 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:19:32.118 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:32.118 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:32.118 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:32.118 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:32.118 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:32.118 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:19:32.118 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:32.118 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:32.118 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:32.118 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:32.118 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:32.118 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:32.118 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:32.118 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:32.118 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:32.118 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:32.118 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:32.118 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:32.118 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.118 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:32.118 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.118 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:32.118 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:32.118 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:19:32.118 11:42:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:40.265 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:40.265 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:19:40.265 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:40.265 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:19:40.266 
11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:40.266 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:40.266 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:40.266 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:40.266 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:40.266 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:40.266 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:19:40.266 00:19:40.266 --- 10.0.0.2 ping statistics --- 00:19:40.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.266 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:40.266 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:40.266 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:19:40.266 00:19:40.266 --- 10.0.0.1 ping statistics --- 00:19:40.266 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.266 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:40.266 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:19:40.267 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:40.267 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:40.267 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:40.267 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:40.267 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:40.267 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:40.267 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:40.267 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:40.267 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:40.267 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:40.267 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:40.267 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1079757 00:19:40.267 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1079757 00:19:40.267 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:40.267 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 1079757 ']' 00:19:40.267 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.267 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@838 -- # local max_retries=100 00:19:40.267 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.267 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:40.267 11:43:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:40.267 [2024-11-15 11:43:04.952970] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:19:40.267 [2024-11-15 11:43:04.953037] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:40.267 [2024-11-15 11:43:05.060528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:40.267 [2024-11-15 11:43:05.120900] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:40.267 [2024-11-15 11:43:05.120947] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:40.267 [2024-11-15 11:43:05.120956] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:40.267 [2024-11-15 11:43:05.120963] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:40.267 [2024-11-15 11:43:05.120970] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
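The sequence traced above is nvmf_tcp_init from test/nvmf/common.sh: with two physical E810 ports on one host, it moves one port into a private network namespace so the same machine can act as both NVMe/TCP target and initiator. A condensed sketch of the traced commands (the script's helper wrappers and error handling are omitted):

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                 # target side lives here
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator NIC, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# open the NVMe/TCP port; the comment tag lets teardown strip the rule
# later with: iptables-save | grep -v SPDK_NVMF | iptables-restore
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF

ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

Everything the target runs from here on is prefixed with ip netns exec cvl_0_0_ns_spdk, which is exactly what NVMF_TARGET_NS_CMD expands to.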
00:19:40.267 [2024-11-15 11:43:05.122529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:40.267 [2024-11-15 11:43:05.122689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:19:40.267 [2024-11-15 11:43:05.122846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:40.267 [2024-11-15 11:43:05.122847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:19:40.528 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:40.528 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:19:40.528 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:40.528 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:40.528 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:40.528 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:40.528 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:40.528 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.528 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:40.528 [2024-11-15 11:43:05.833927] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:40.528 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.528 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:40.528 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.528 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:40.528 Malloc0 00:19:40.528 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.528 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:40.528 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.528 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:40.528 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.528 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:40.528 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.528 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:40.528 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.528 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:19:40.528 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.528 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:40.528 [2024-11-15 11:43:05.888051] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:40.528 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.529 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:40.529 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:40.529 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:19:40.529 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:19:40.529 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:40.529 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:40.529 { 00:19:40.529 "params": { 00:19:40.529 "name": "Nvme$subsystem", 00:19:40.529 "trtype": "$TEST_TRANSPORT", 00:19:40.529 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.529 "adrfam": "ipv4", 00:19:40.529 "trsvcid": "$NVMF_PORT", 00:19:40.529 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.529 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.529 "hdgst": ${hdgst:-false}, 00:19:40.529 "ddgst": ${ddgst:-false} 00:19:40.529 }, 00:19:40.529 "method": "bdev_nvme_attach_controller" 00:19:40.529 } 00:19:40.529 EOF 00:19:40.529 )") 00:19:40.529 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:19:40.529 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:19:40.529 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:19:40.529 11:43:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:40.529 "params": { 00:19:40.529 "name": "Nvme1", 00:19:40.529 "trtype": "tcp", 00:19:40.529 "traddr": "10.0.0.2", 00:19:40.529 "adrfam": "ipv4", 00:19:40.529 "trsvcid": "4420", 00:19:40.529 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:40.529 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:40.529 "hdgst": false, 00:19:40.529 "ddgst": false 00:19:40.529 }, 00:19:40.529 "method": "bdev_nvme_attach_controller" 00:19:40.529 }' 00:19:40.529 [2024-11-15 11:43:05.945432] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
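nvmfappstart has just launched nvmf_tgt inside the namespace with the --no-huge -s 1024 pair this test variant exists to exercise, and bdevio.sh then shapes the target entirely over the RPC socket. The same five calls written out directly against scripts/rpc.py, flag spellings copied from the trace above (the rpc path is a placeholder):

rpc=/path/to/spdk/scripts/rpc.py   # placeholder; talks to /var/tmp/spdk.sock

$rpc nvmf_create_transport -t tcp -o -u 8192    # transport options as traced
$rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001                    # -a: allow any host to connect
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420                  # the namespaced target IP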
00:19:40.529 [2024-11-15 11:43:05.945510] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1080061 ] 00:19:40.790 [2024-11-15 11:43:06.043896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:40.790 [2024-11-15 11:43:06.107035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:40.790 [2024-11-15 11:43:06.107196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.790 [2024-11-15 11:43:06.107196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:41.050 I/O targets: 00:19:41.050 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:41.050 00:19:41.050 00:19:41.050 CUnit - A unit testing framework for C - Version 2.1-3 00:19:41.050 http://cunit.sourceforge.net/ 00:19:41.050 00:19:41.050 00:19:41.050 Suite: bdevio tests on: Nvme1n1 00:19:41.050 Test: blockdev write read block ...passed 00:19:41.050 Test: blockdev write zeroes read block ...passed 00:19:41.311 Test: blockdev write zeroes read no split ...passed 00:19:41.311 Test: blockdev write zeroes read split ...passed 00:19:41.311 Test: blockdev write zeroes read split partial ...passed 00:19:41.312 Test: blockdev reset ...[2024-11-15 11:43:06.587485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:19:41.312 [2024-11-15 11:43:06.587593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161e800 (9): Bad file descriptor 00:19:41.312 [2024-11-15 11:43:06.698269] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
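Worth pausing on how bdevio got its controller definition above: gen_nvmf_target_json assembles a bdev_nvme_attach_controller stanza in a heredoc, with ${hdgst:-false}-style expansions defaulting any digest knob the caller left unset, and the binary reads the result as --json /dev/fd/62. A reduced single-subsystem sketch of that generator (the trace's exec-fd plumbing is simplified here to process substitution):

gen_json() {
    local config=()
    # the heredoc keeps the JSON readable; ${var:-false} fills in
    # defaults for hdgst/ddgst when the caller exported nothing
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
    local IFS=,
    printf '%s\n' "${config[*]}" | jq .   # join stanzas, validate with jq
}

bdevio --json <(gen_json) --no-huge -s 1024   # no temp file touches disk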
00:19:41.312 passed 00:19:41.312 Test: blockdev write read 8 blocks ...passed 00:19:41.312 Test: blockdev write read size > 128k ...passed 00:19:41.312 Test: blockdev write read invalid size ...passed 00:19:41.312 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:41.312 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:41.312 Test: blockdev write read max offset ...passed 00:19:41.572 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:41.572 Test: blockdev writev readv 8 blocks ...passed 00:19:41.572 Test: blockdev writev readv 30 x 1block ...passed 00:19:41.572 Test: blockdev writev readv block ...passed 00:19:41.572 Test: blockdev writev readv size > 128k ...passed 00:19:41.572 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:41.572 Test: blockdev comparev and writev ...[2024-11-15 11:43:06.923973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:41.572 [2024-11-15 11:43:06.924012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:41.572 [2024-11-15 11:43:06.924030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:41.572 [2024-11-15 11:43:06.924039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:41.572 [2024-11-15 11:43:06.924474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:41.572 [2024-11-15 11:43:06.924491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:41.572 [2024-11-15 11:43:06.924506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:41.572 [2024-11-15 11:43:06.924514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:41.572 [2024-11-15 11:43:06.924946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:41.573 [2024-11-15 11:43:06.924959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:41.573 [2024-11-15 11:43:06.924974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:41.573 [2024-11-15 11:43:06.924982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:41.573 [2024-11-15 11:43:06.925473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:41.573 [2024-11-15 11:43:06.925485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:41.573 [2024-11-15 11:43:06.925498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:41.573 [2024-11-15 11:43:06.925506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:41.573 passed 00:19:41.573 Test: blockdev nvme passthru rw ...passed 00:19:41.573 Test: blockdev nvme passthru vendor specific ...[2024-11-15 11:43:07.008361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:41.573 [2024-11-15 11:43:07.008375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:41.573 [2024-11-15 11:43:07.008721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:41.573 [2024-11-15 11:43:07.008732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:41.573 [2024-11-15 11:43:07.009105] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:41.573 [2024-11-15 11:43:07.009115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:41.573 [2024-11-15 11:43:07.009476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:41.573 [2024-11-15 11:43:07.009487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:41.573 passed 00:19:41.573 Test: blockdev nvme admin passthru ...passed 00:19:41.833 Test: blockdev copy ...passed 00:19:41.833 00:19:41.833 Run Summary: Type Total Ran Passed Failed Inactive 00:19:41.833 suites 1 1 n/a 0 0 00:19:41.833 tests 23 23 23 0 0 00:19:41.833 asserts 152 152 152 0 n/a 00:19:41.833 00:19:41.833 Elapsed time = 1.251 seconds 00:19:42.094 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:42.094 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.094 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:42.094 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.094 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:42.094 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:42.094 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:42.094 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:19:42.094 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:42.094 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:19:42.094 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:42.094 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:42.094 rmmod nvme_tcp 00:19:42.094 rmmod nvme_fabrics 00:19:42.094 rmmod nvme_keyring 00:19:42.094 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:42.094 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:19:42.094 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:19:42.094 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1079757 ']' 00:19:42.094 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1079757 00:19:42.094 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 1079757 ']' 00:19:42.094 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 1079757 00:19:42.094 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:19:42.094 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:42.094 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1079757 00:19:42.094 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:19:42.094 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:19:42.094 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1079757' 00:19:42.094 killing process with pid 1079757 00:19:42.094 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 1079757 00:19:42.094 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 1079757 00:19:42.355 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:42.355 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:42.355 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:42.355 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:19:42.355 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:19:42.355 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:42.355 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:19:42.355 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:42.355 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:42.355 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.355 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:42.355 11:43:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.903 11:43:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:44.903 00:19:44.903 real 0m12.671s 00:19:44.903 user 0m14.987s 00:19:44.903 sys 0m6.735s 00:19:44.903 11:43:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:44.903 11:43:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:19:44.903 ************************************ 00:19:44.903 END TEST nvmf_bdevio_no_huge 00:19:44.903 ************************************ 00:19:44.903 11:43:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:44.903 11:43:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:44.903 11:43:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:44.903 11:43:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:44.903 ************************************ 00:19:44.903 START TEST nvmf_tls 00:19:44.903 ************************************ 00:19:44.903 11:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:44.903 * Looking for test storage... 00:19:44.904 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:44.904 11:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:44.904 11:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:19:44.904 11:43:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:44.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.904 --rc genhtml_branch_coverage=1 00:19:44.904 --rc genhtml_function_coverage=1 00:19:44.904 --rc genhtml_legend=1 00:19:44.904 --rc geninfo_all_blocks=1 00:19:44.904 --rc geninfo_unexecuted_blocks=1 00:19:44.904 00:19:44.904 ' 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:44.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.904 --rc genhtml_branch_coverage=1 00:19:44.904 --rc genhtml_function_coverage=1 00:19:44.904 --rc genhtml_legend=1 00:19:44.904 --rc geninfo_all_blocks=1 00:19:44.904 --rc geninfo_unexecuted_blocks=1 00:19:44.904 00:19:44.904 ' 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:44.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.904 --rc genhtml_branch_coverage=1 00:19:44.904 --rc genhtml_function_coverage=1 00:19:44.904 --rc genhtml_legend=1 00:19:44.904 --rc geninfo_all_blocks=1 00:19:44.904 --rc geninfo_unexecuted_blocks=1 00:19:44.904 00:19:44.904 ' 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:44.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:44.904 --rc genhtml_branch_coverage=1 00:19:44.904 --rc genhtml_function_coverage=1 00:19:44.904 --rc genhtml_legend=1 00:19:44.904 --rc geninfo_all_blocks=1 00:19:44.904 --rc geninfo_unexecuted_blocks=1 00:19:44.904 00:19:44.904 ' 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
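This preamble repeats at the top of every test in the suite: scripts/common.sh probes the installed lcov and picks coverage flags by comparing version strings field by field. The idiom reduced to a standalone function; the probe at the end is illustrative rather than the script's exact call:

lt() {   # return 0 when version $1 sorts strictly before version $2
    local -a ver1 ver2
    local v ver1_l ver2_l
    IFS=.-: read -ra ver1 <<< "$1"   # split on dots, dashes, colons
    IFS=.-: read -ra ver2 <<< "$2"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal is not "less than"
}

# mirrors the traced decision: lcov 1.15 < 2, so the 1.x rc flags apply
if lt "$(lcov --version | awk '{print $NF}')" 2; then
    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi

The real helper also routes each field through the decimal sanitizer seen in the trace (the [[ 1 =~ ^[0-9]+$ ]] checks) so that non-numeric suffixes cannot break the arithmetic.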
00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:44.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:44.904 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:44.905 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:44.905 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:44.905 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.905 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:44.905 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.905 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:44.905 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:44.905 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:19:44.905 11:43:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
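gather_supported_nvmf_pci_devs is rebuilding its vendor:device tables at this point, Intel E810 (0x1592, 0x159b) and X722 (0x37d2) plus the Mellanox list, before repeating the sysfs walk that printed 'Found net devices under 0000:4b:00.x' earlier. The core of that walk, sketched with this run's two E810 functions hard-coded and the pci_bus_cache population omitted:

e810=(0000:4b:00.0 0000:4b:00.1)   # the two ports this run discovered
pci_devs=("${e810[@]}")
net_devs=()

for pci in "${pci_devs[@]}"; do
    # every netdev bound to a PCI function shows up as a directory
    # under that function's sysfs node
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip paths, keep ifnames
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done

The traced loop additionally filters on the bound driver (ice here, rejected if unknown or unbound) and on each interface reporting state up before it is accepted into net_devs.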
00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:53.049 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:53.049 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:53.049 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:53.049 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:53.049 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:53.050 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP=
00:19:53.050 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:19:53.050 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:19:53.050 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:19:53.050 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:19:53.050 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:19:53.050 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:19:53.050 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:19:53.050 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:19:53.050 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:19:53.050 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:19:53.050 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:19:53.050 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:19:53.050 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:19:53.050 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:19:53.050 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:53.050 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms
00:19:53.050
00:19:53.050 --- 10.0.0.2 ping statistics ---
00:19:53.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:53.050 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms
00:19:53.050 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:19:53.050 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:53.050 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms
00:19:53.050
00:19:53.050 --- 10.0.0.1 ping statistics ---
00:19:53.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:53.050 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms
00:19:53.050 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:53.050 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0
00:19:53.050 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:19:53.050 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:53.050 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:19:53.050 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:19:53.050 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:53.050 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:19:53.050 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:19:53.050 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc
00:19:53.050 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:19:53.050 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable
00:19:53.050 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:19:53.050 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1084564
00:19:53.050 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1084564
00:19:53.050 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc
00:19:53.050 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1084564 ']'
00:19:53.050 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:53.050 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100
00:19:53.050 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:53.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:53.050 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable
00:19:53.050 11:43:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:19:53.050 [2024-11-15 11:43:17.713555] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
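The nvmftestinit plumbing traced above splits the two physical e810 ports between the default namespace (initiator) and a dedicated one (target), so traffic genuinely crosses the wire. Collapsed into plain commands, what the helper did amounts to the following sketch (interface names and addresses taken from this log):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP on the initiator port
ping -c 1 10.0.0.2                                     # initiator -> target sanity check

The target application is then started inside that namespace via $NVMF_TARGET_NS_CMD (ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...), which is the launch that begins in the records that follow.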
00:19:53.050 [2024-11-15 11:43:17.713630] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:53.050 [2024-11-15 11:43:17.814497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.050 [2024-11-15 11:43:17.865614] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:53.050 [2024-11-15 11:43:17.865665] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:53.050 [2024-11-15 11:43:17.865674] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:53.050 [2024-11-15 11:43:17.865681] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:53.050 [2024-11-15 11:43:17.865687] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:53.050 [2024-11-15 11:43:17.866479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.050 11:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:53.050 11:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:53.050 11:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:53.050 11:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:53.050 11:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.311 11:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.311 11:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:53.311 11:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:53.311 true 00:19:53.311 11:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:53.311 11:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:53.572 11:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:53.572 11:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:53.572 11:43:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:53.833 11:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:53.833 11:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:54.095 11:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:54.095 11:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:54.095 11:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:54.095 11:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:54.095 11:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:54.356 11:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:54.356 11:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:54.356 11:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:54.356 11:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:54.616 11:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:54.616 11:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:54.616 11:43:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:54.878 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:54.878 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:54.878 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:54.878 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:54.878 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:55.139 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:55.139 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:55.400 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:55.400 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:55.400 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:55.400 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:55.400 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:55.400 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:55.400 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:55.400 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:55.400 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:55.400 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:55.400 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:55.400 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:55.400 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest
00:19:55.400 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1
00:19:55.400 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100
00:19:55.400 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1
00:19:55.400 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python -
00:19:55.400 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:
00:19:55.400 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp
00:19:55.400 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.GdkkZjzBeO
00:19:55.400 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp
00:19:55.400 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.yTLblbpUFJ
00:19:55.400 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
00:19:55.400 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:
00:19:55.400 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.GdkkZjzBeO
00:19:55.400 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.yTLblbpUFJ
00:19:55.400 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
00:19:55.661 11:43:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init
00:19:55.923 11:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.GdkkZjzBeO
00:19:55.923 11:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.GdkkZjzBeO
00:19:55.923 11:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:19:55.923 [2024-11-15 11:43:21.372972] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:19:55.923 11:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:19:56.184 11:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:19:56.445 [2024-11-15 11:43:21.689748] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:19:56.445 [2024-11-15 11:43:21.689950] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:56.445 11:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:19:56.445 malloc0
00:19:56.445 11:43:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
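The two NVMeTLSkey-1 strings generated above are TLS pre-shared keys in the NVMe interchange format: a prefix, a two-hex-digit hash indicator (01 here), and a colon-terminated base64 blob. Judging from the format_key helper traced above (prefix/key/digest plus an inline python step), the blob is the configured key bytes with a CRC32 appended before encoding. A rough standalone equivalent, under the assumption that the CRC32 is serialized little-endian as in SPDK's nvmf/common.sh helper:

# sketch of format_interchange_psk; key value taken from the trace above
key=00112233445566778899aabbccddeeff
python3 - "$key" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                   # PSK bytes as configured
crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte integrity check (assumed little-endian)
print("NVMeTLSkey-1:01:%s:" % base64.b64encode(key + crc).decode())
PY

For the first key this should reproduce NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: as echoed into key_path above; both files are then chmod 0600, presumably because the keyring module refuses key files with looser permissions.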
00:19:56.706 11:43:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.GdkkZjzBeO
00:19:56.967 11:43:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
00:19:56.968 11:43:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.GdkkZjzBeO
00:20:09.197 Initializing NVMe Controllers
00:20:09.197 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:09.197 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:09.197 Initialization complete. Launching workers.
00:20:09.197 ========================================================
00:20:09.197 Latency(us)
00:20:09.197 Device Information : IOPS MiB/s Average min max
00:20:09.197 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18627.99 72.77 3435.92 1064.52 4080.45
00:20:09.197 ========================================================
00:20:09.197 Total : 18627.99 72.77 3435.92 1064.52 4080.45
00:20:09.197
00:20:09.197 11:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.GdkkZjzBeO
00:20:09.197 11:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:20:09.197 11:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:20:09.197 11:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:20:09.197 11:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.GdkkZjzBeO
00:20:09.197 11:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:20:09.197 11:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1087468
00:20:09.197 11:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:20:09.197 11:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1087468 /var/tmp/bdevperf.sock
00:20:09.197 11:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:20:09.197 11:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1087468 ']'
00:20:09.197 11:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:20:09.197 11:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100
00:20:09.197 11:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:20:09.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:09.197 11:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:09.197 11:43:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.197 [2024-11-15 11:43:32.543228] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:20:09.197 [2024-11-15 11:43:32.543282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1087468 ] 00:20:09.197 [2024-11-15 11:43:32.631512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.197 [2024-11-15 11:43:32.666915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:09.197 11:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:09.197 11:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:09.197 11:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GdkkZjzBeO 00:20:09.197 11:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:09.197 [2024-11-15 11:43:33.635536] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:09.197 TLSTESTn1 00:20:09.197 11:43:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:09.197 Running I/O for 10 seconds... 
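The initiator side of this run is driven entirely over bdevperf's private RPC socket: the PSK file is registered as a keyring key, the TLS controller is attached with it, and only then is the workload kicked off. Condensed from the trace above into three commands (rpc.py paths shortened for readability):

# condensed from the run_bdevperf trace above
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GdkkZjzBeO
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests   # starts the verify run on TLSTESTn1

The IOPS samples that follow are that verify workload running over the TLS-wrapped NVMe/TCP connection.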
00:20:10.399 4155.00 IOPS, 16.23 MiB/s [2024-11-15T10:43:36.838Z]
4847.00 IOPS, 18.93 MiB/s [2024-11-15T10:43:38.220Z]
4998.00 IOPS, 19.52 MiB/s [2024-11-15T10:43:39.160Z]
5088.75 IOPS, 19.88 MiB/s [2024-11-15T10:43:40.101Z]
4985.80 IOPS, 19.48 MiB/s [2024-11-15T10:43:41.042Z]
5119.67 IOPS, 20.00 MiB/s [2024-11-15T10:43:41.985Z]
5208.29 IOPS, 20.34 MiB/s [2024-11-15T10:43:42.925Z]
5322.50 IOPS, 20.79 MiB/s [2024-11-15T10:43:43.865Z]
5320.33 IOPS, 20.78 MiB/s [2024-11-15T10:43:43.865Z]
5427.80 IOPS, 21.20 MiB/s
00:20:18.367 Latency(us)
00:20:18.367 [2024-11-15T10:43:43.865Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:18.367 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:18.367 Verification LBA range: start 0x0 length 0x2000
00:20:18.367 TLSTESTn1 : 10.01 5434.13 21.23 0.00 0.00 23521.35 4751.36 26869.76
00:20:18.367 [2024-11-15T10:43:43.865Z] ===================================================================================================================
00:20:18.367 [2024-11-15T10:43:43.865Z] Total : 5434.13 21.23 0.00 0.00 23521.35 4751.36 26869.76
00:20:18.367 {
00:20:18.367 "results": [
00:20:18.367 {
00:20:18.367 "job": "TLSTESTn1",
00:20:18.367 "core_mask": "0x4",
00:20:18.367 "workload": "verify",
00:20:18.367 "status": "finished",
00:20:18.367 "verify_range": {
00:20:18.367 "start": 0,
00:20:18.368 "length": 8192
00:20:18.368 },
00:20:18.368 "queue_depth": 128,
00:20:18.368 "io_size": 4096,
00:20:18.368 "runtime": 10.011722,
00:20:18.368 "iops": 5434.130112681914,
00:20:18.368 "mibps": 21.227070752663728,
00:20:18.368 "io_failed": 0,
00:20:18.368 "io_timeout": 0,
00:20:18.368 "avg_latency_us": 23521.34681738811,
00:20:18.368 "min_latency_us": 4751.36,
00:20:18.368 "max_latency_us": 26869.76
00:20:18.368 }
00:20:18.368 ],
00:20:18.368 "core_count": 1
00:20:18.368 }
00:20:18.628 11:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:20:18.628 11:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1087468
00:20:18.628 11:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1087468 ']'
00:20:18.628 11:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1087468
00:20:18.628 11:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname
00:20:18.628 11:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:20:18.628 11:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1087468
00:20:18.628 11:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2
00:20:18.628 11:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']'
00:20:18.628 11:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1087468'
00:20:18.628 killing process with pid 1087468
00:20:18.628 11:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1087468
00:20:18.628 Received shutdown signal, test time was about 10.000000 seconds
00:20:18.628
00:20:18.628 Latency(us)
00:20:18.628 [2024-11-15T10:43:44.126Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:18.628 [2024-11-15T10:43:44.126Z] ===================================================================================================================
00:20:18.628 [2024-11-15T10:43:44.126Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:18.628 11:43:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1087468
00:20:18.628 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yTLblbpUFJ
00:20:18.628 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0
00:20:18.628 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yTLblbpUFJ
00:20:18.628 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf
00:20:18.628 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:20:18.628 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf
00:20:18.628 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:20:18.628 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yTLblbpUFJ
00:20:18.628 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:20:18.628 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:20:18.628 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:20:18.628 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yTLblbpUFJ
00:20:18.628 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:20:18.628 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:20:18.628 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1089804
00:20:18.628 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:20:18.628 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1089804 /var/tmp/bdevperf.sock
00:20:18.628 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1089804 ']'
00:20:18.628 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:20:18.628 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100
00:20:18.628 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:20:18.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
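The run above verified I/O with the matching key; everything from here on is negative testing. host1 is registered on the target with key0 (tmp.GdkkZjzBeO), so this new bdevperf instance, attaching with the second key (tmp.yTLblbpUFJ), must fail, and the NOT wrapper inverts the exit status so an expected failure keeps the suite green. A minimal sketch of that helper (the real one in autotest_common.sh, traced above, additionally lets genuine crashes through by checking for exit codes above 128):

NOT() {
    # succeed only when the wrapped command fails (sketch, not the real helper)
    if "$@"; then
        return 1
    fi
    return 0
}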
00:20:18.628 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:18.628 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:18.628 [2024-11-15 11:43:44.071409] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:20:18.628 [2024-11-15 11:43:44.071452] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1089804 ] 00:20:18.628 [2024-11-15 11:43:44.121083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.890 [2024-11-15 11:43:44.149510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:18.890 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:18.890 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:18.890 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yTLblbpUFJ 00:20:19.150 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:19.150 [2024-11-15 11:43:44.575728] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:19.150 [2024-11-15 11:43:44.584688] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:19.150 [2024-11-15 11:43:44.584742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f5bb0 (107): Transport endpoint is not connected 00:20:19.150 [2024-11-15 11:43:44.585713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f5bb0 (9): Bad file descriptor 00:20:19.150 [2024-11-15 11:43:44.586716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:19.150 [2024-11-15 11:43:44.586723] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:19.150 [2024-11-15 11:43:44.586728] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:19.150 [2024-11-15 11:43:44.586736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:20:19.150 request:
00:20:19.150 {
00:20:19.150 "name": "TLSTEST",
00:20:19.150 "trtype": "tcp",
00:20:19.150 "traddr": "10.0.0.2",
00:20:19.150 "adrfam": "ipv4",
00:20:19.150 "trsvcid": "4420",
00:20:19.150 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:20:19.150 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:20:19.150 "prchk_reftag": false,
00:20:19.150 "prchk_guard": false,
00:20:19.150 "hdgst": false,
00:20:19.150 "ddgst": false,
00:20:19.150 "psk": "key0",
00:20:19.150 "allow_unrecognized_csi": false,
00:20:19.150 "method": "bdev_nvme_attach_controller",
00:20:19.150 "req_id": 1
00:20:19.150 }
00:20:19.150 Got JSON-RPC error response
00:20:19.150 response:
00:20:19.150 {
00:20:19.150 "code": -5,
00:20:19.150 "message": "Input/output error"
00:20:19.150 }
00:20:19.150 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1089804
00:20:19.150 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1089804 ']'
00:20:19.150 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1089804
00:20:19.150 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname
00:20:19.150 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:20:19.150 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1089804
00:20:19.410 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2
00:20:19.410 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']'
00:20:19.410 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1089804'
00:20:19.410 killing process with pid 1089804
00:20:19.410 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1089804
00:20:19.410 Received shutdown signal, test time was about 10.000000 seconds
00:20:19.410
00:20:19.410 Latency(us)
00:20:19.410 [2024-11-15T10:43:44.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:19.410 [2024-11-15T10:43:44.908Z] ===================================================================================================================
00:20:19.410 [2024-11-15T10:43:44.908Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:20:19.410 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1089804
00:20:19.410 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1
00:20:19.410 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1
00:20:19.410 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:20:19.410 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:20:19.410 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:20:19.410 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.GdkkZjzBeO
00:20:19.410 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0
00:20:19.410 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2
/tmp/tmp.GdkkZjzBeO 00:20:19.410 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:19.410 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:19.410 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:19.410 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:19.410 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.GdkkZjzBeO 00:20:19.410 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:19.410 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:19.410 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:19.410 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.GdkkZjzBeO 00:20:19.410 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:19.410 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1089824 00:20:19.410 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:19.410 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1089824 /var/tmp/bdevperf.sock 00:20:19.411 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:19.411 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1089824 ']' 00:20:19.411 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:19.411 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:19.411 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:19.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:19.411 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:19.411 11:43:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:19.411 [2024-11-15 11:43:44.830884] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
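This next case (tls.sh@150) keeps the correct key but attaches as nqn.2016-06.io.spdk:host2, which was never added to the subsystem, so the target's PSK lookup on the server side comes up empty. The identity string it searches for is visible verbatim in the error records below, and appears to be built from the NVMe/TCP TLS PSK identity convention, roughly:

# PSK identity the target reconstructs (format taken from the lookup error below)
hostnqn=nqn.2016-06.io.spdk:host2
subnqn=nqn.2016-06.io.spdk:cnode1
echo "NVMe0R01 ${hostnqn} ${subnqn}"

With no PSK found for that identity, the TLS handshake never completes and the initiator-side attach fails with the same Input/output error as before.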
00:20:19.411 [2024-11-15 11:43:44.830941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1089824 ] 00:20:19.671 [2024-11-15 11:43:44.914238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.671 [2024-11-15 11:43:44.941690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:20.248 11:43:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:20.248 11:43:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:20.248 11:43:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GdkkZjzBeO 00:20:20.514 11:43:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:20:20.514 [2024-11-15 11:43:45.961071] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:20.515 [2024-11-15 11:43:45.967962] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:20.515 [2024-11-15 11:43:45.967980] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:20.515 [2024-11-15 11:43:45.968004] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:20.515 [2024-11-15 11:43:45.968339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a92bb0 (107): Transport endpoint is not connected 00:20:20.515 [2024-11-15 11:43:45.969334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a92bb0 (9): Bad file descriptor 00:20:20.515 [2024-11-15 11:43:45.970336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:20:20.515 [2024-11-15 11:43:45.970343] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:20.515 [2024-11-15 11:43:45.970348] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:20:20.515 [2024-11-15 11:43:45.970356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:20:20.515 request:
00:20:20.515 {
00:20:20.515 "name": "TLSTEST",
00:20:20.515 "trtype": "tcp",
00:20:20.515 "traddr": "10.0.0.2",
00:20:20.515 "adrfam": "ipv4",
00:20:20.515 "trsvcid": "4420",
00:20:20.515 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:20:20.515 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:20:20.515 "prchk_reftag": false,
00:20:20.515 "prchk_guard": false,
00:20:20.515 "hdgst": false,
00:20:20.515 "ddgst": false,
00:20:20.515 "psk": "key0",
00:20:20.515 "allow_unrecognized_csi": false,
00:20:20.515 "method": "bdev_nvme_attach_controller",
00:20:20.515 "req_id": 1
00:20:20.515 }
00:20:20.515 Got JSON-RPC error response
00:20:20.515 response:
00:20:20.515 {
00:20:20.515 "code": -5,
00:20:20.515 "message": "Input/output error"
00:20:20.515 }
00:20:20.515 11:43:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1089824
00:20:20.515 11:43:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1089824 ']'
00:20:20.515 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1089824
00:20:20.516 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname
00:20:20.516 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:20:20.516 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1089824
00:20:20.776 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2
00:20:20.776 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']'
00:20:20.776 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1089824'
00:20:20.776 killing process with pid 1089824
00:20:20.776 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1089824
00:20:20.776 Received shutdown signal, test time was about 10.000000 seconds
00:20:20.776
00:20:20.776 Latency(us)
00:20:20.776 [2024-11-15T10:43:46.274Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:20.776 [2024-11-15T10:43:46.274Z] ===================================================================================================================
00:20:20.776 [2024-11-15T10:43:46.274Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:20:20.776 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1089824
00:20:20.776 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1
00:20:20.776 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1
00:20:20.776 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:20:20.776 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:20:20.776 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:20:20.776 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.GdkkZjzBeO
00:20:20.776 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0
00:20:20.776 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1
/tmp/tmp.GdkkZjzBeO 00:20:20.776 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:20.776 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:20.776 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:20.776 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:20.776 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.GdkkZjzBeO 00:20:20.776 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:20.777 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:20.777 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:20.777 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.GdkkZjzBeO 00:20:20.777 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:20.777 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1090163 00:20:20.777 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:20.777 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1090163 /var/tmp/bdevperf.sock 00:20:20.777 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:20.777 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1090163 ']' 00:20:20.777 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:20.777 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:20.777 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:20.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:20.777 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:20.777 11:43:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:20.777 [2024-11-15 11:43:46.211244] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:20:20.777 [2024-11-15 11:43:46.211300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1090163 ] 00:20:21.037 [2024-11-15 11:43:46.297840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.037 [2024-11-15 11:43:46.325301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:21.608 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:21.608 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:21.608 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.GdkkZjzBeO 00:20:21.867 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:21.867 [2024-11-15 11:43:47.344762] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:21.867 [2024-11-15 11:43:47.351333] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:21.867 [2024-11-15 11:43:47.351354] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:21.867 [2024-11-15 11:43:47.351374] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:21.867 [2024-11-15 11:43:47.351987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248cbb0 (107): Transport endpoint is not connected 00:20:21.867 [2024-11-15 11:43:47.352982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248cbb0 (9): Bad file descriptor 00:20:21.867 [2024-11-15 11:43:47.353984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:20:21.867 [2024-11-15 11:43:47.353991] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:21.867 [2024-11-15 11:43:47.353997] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:20:21.867 [2024-11-15 11:43:47.354004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:20:21.867 request: 00:20:21.867 { 00:20:21.867 "name": "TLSTEST", 00:20:21.867 "trtype": "tcp", 00:20:21.867 "traddr": "10.0.0.2", 00:20:21.867 "adrfam": "ipv4", 00:20:21.867 "trsvcid": "4420", 00:20:21.867 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:21.867 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:21.867 "prchk_reftag": false, 00:20:21.867 "prchk_guard": false, 00:20:21.867 "hdgst": false, 00:20:21.867 "ddgst": false, 00:20:21.867 "psk": "key0", 00:20:21.867 "allow_unrecognized_csi": false, 00:20:21.867 "method": "bdev_nvme_attach_controller", 00:20:21.867 "req_id": 1 00:20:21.867 } 00:20:21.867 Got JSON-RPC error response 00:20:21.867 response: 00:20:21.867 { 00:20:21.867 "code": -5, 00:20:21.867 "message": "Input/output error" 00:20:21.867 } 00:20:22.128 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1090163 00:20:22.128 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1090163 ']' 00:20:22.128 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1090163 00:20:22.128 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:22.128 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:22.128 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1090163 00:20:22.128 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:22.128 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:22.128 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1090163' 00:20:22.128 killing process with pid 1090163 00:20:22.128 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1090163 00:20:22.128 Received shutdown signal, test time was about 10.000000 seconds 00:20:22.128 00:20:22.128 Latency(us) 00:20:22.128 [2024-11-15T10:43:47.626Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.128 [2024-11-15T10:43:47.626Z] =================================================================================================================== 00:20:22.128 [2024-11-15T10:43:47.626Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:22.128 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1090163 00:20:22.128 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:22.128 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:22.128 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:22.128 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:22.128 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:22.128 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:22.128 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:22.128 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:22.128 
11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:22.128 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:22.128 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:22.128 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:22.128 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:22.128 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:22.128 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:22.128 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:22.128 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:22.128 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:22.128 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1090503 00:20:22.128 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:22.128 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1090503 /var/tmp/bdevperf.sock 00:20:22.128 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:22.128 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1090503 ']' 00:20:22.128 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:22.128 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:22.128 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:22.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:22.128 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:22.128 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:22.128 [2024-11-15 11:43:47.598996] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:20:22.128 [2024-11-15 11:43:47.599052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1090503 ] 00:20:22.389 [2024-11-15 11:43:47.658687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.389 [2024-11-15 11:43:47.686483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.389 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:22.389 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:22.389 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:20:22.650 [2024-11-15 11:43:47.924069] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:20:22.650 [2024-11-15 11:43:47.924096] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:22.650 request: 00:20:22.650 { 00:20:22.650 "name": "key0", 00:20:22.650 "path": "", 00:20:22.650 "method": "keyring_file_add_key", 00:20:22.650 "req_id": 1 00:20:22.650 } 00:20:22.650 Got JSON-RPC error response 00:20:22.650 response: 00:20:22.650 { 00:20:22.650 "code": -1, 00:20:22.650 "message": "Operation not permitted" 00:20:22.650 } 00:20:22.650 11:43:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:22.650 [2024-11-15 11:43:48.100592] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:22.650 [2024-11-15 11:43:48.100614] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:22.650 request: 00:20:22.650 { 00:20:22.650 "name": "TLSTEST", 00:20:22.650 "trtype": "tcp", 00:20:22.650 "traddr": "10.0.0.2", 00:20:22.650 "adrfam": "ipv4", 00:20:22.650 "trsvcid": "4420", 00:20:22.650 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.650 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:22.650 "prchk_reftag": false, 00:20:22.650 "prchk_guard": false, 00:20:22.650 "hdgst": false, 00:20:22.650 "ddgst": false, 00:20:22.650 "psk": "key0", 00:20:22.650 "allow_unrecognized_csi": false, 00:20:22.650 "method": "bdev_nvme_attach_controller", 00:20:22.650 "req_id": 1 00:20:22.650 } 00:20:22.650 Got JSON-RPC error response 00:20:22.650 response: 00:20:22.650 { 00:20:22.650 "code": -126, 00:20:22.650 "message": "Required key not available" 00:20:22.650 } 00:20:22.650 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1090503 00:20:22.650 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1090503 ']' 00:20:22.650 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1090503 00:20:22.650 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:22.650 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:22.650 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 
1090503 00:20:22.910 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:22.910 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:22.910 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1090503' 00:20:22.910 killing process with pid 1090503 00:20:22.910 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1090503 00:20:22.910 Received shutdown signal, test time was about 10.000000 seconds 00:20:22.910 00:20:22.910 Latency(us) 00:20:22.910 [2024-11-15T10:43:48.408Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.910 [2024-11-15T10:43:48.408Z] =================================================================================================================== 00:20:22.910 [2024-11-15T10:43:48.408Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:22.910 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1090503 00:20:22.910 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:22.910 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:22.910 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:22.910 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:22.910 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:22.910 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1084564 00:20:22.910 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1084564 ']' 00:20:22.910 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1084564 00:20:22.910 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:22.910 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:22.910 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1084564 00:20:22.910 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:22.910 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:22.910 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1084564' 00:20:22.910 killing process with pid 1084564 00:20:22.910 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1084564 00:20:22.910 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1084564 00:20:23.171 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:23.171 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:23.171 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:20:23.171 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:23.171 11:43:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:23.171 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:20:23.171 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:20:23.171 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:23.171 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:20:23.171 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.KoZLvk7oI9 00:20:23.171 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:23.171 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.KoZLvk7oI9 00:20:23.171 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:20:23.171 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:23.171 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:23.171 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:23.171 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1090546 00:20:23.171 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1090546 00:20:23.171 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:23.171 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1090546 ']' 00:20:23.171 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.171 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:23.171 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:23.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:23.172 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:23.172 11:43:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:23.172 [2024-11-15 11:43:48.564490] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:20:23.172 [2024-11-15 11:43:48.564554] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:23.172 [2024-11-15 11:43:48.658181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.432 [2024-11-15 11:43:48.696545] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:23.432 [2024-11-15 11:43:48.696593] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:23.432 [2024-11-15 11:43:48.696599] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:23.432 [2024-11-15 11:43:48.696608] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:23.432 [2024-11-15 11:43:48.696613] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:23.432 [2024-11-15 11:43:48.697205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:24.005 11:43:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:24.005 11:43:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:24.005 11:43:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:24.005 11:43:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:24.005 11:43:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.005 11:43:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:24.005 11:43:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.KoZLvk7oI9 00:20:24.005 11:43:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.KoZLvk7oI9 00:20:24.005 11:43:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:24.265 [2024-11-15 11:43:49.539320] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:24.265 11:43:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:24.265 11:43:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:24.525 [2024-11-15 11:43:49.904217] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:24.525 [2024-11-15 11:43:49.904415] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:24.525 11:43:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:24.785 malloc0 00:20:24.785 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:25.046 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.KoZLvk7oI9 00:20:25.046 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:25.306 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KoZLvk7oI9 00:20:25.306 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:20:25.306 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:25.306 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:25.306 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.KoZLvk7oI9 00:20:25.306 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:25.306 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1091089 00:20:25.306 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:25.306 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1091089 /var/tmp/bdevperf.sock 00:20:25.306 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:25.306 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1091089 ']' 00:20:25.306 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:25.306 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:25.306 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:25.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:25.306 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:25.306 11:43:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.306 [2024-11-15 11:43:50.700827] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:20:25.306 [2024-11-15 11:43:50.700881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1091089 ] 00:20:25.306 [2024-11-15 11:43:50.782892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.567 [2024-11-15 11:43:50.812908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.146 11:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:26.146 11:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:26.146 11:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KoZLvk7oI9 00:20:26.420 11:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:26.420 [2024-11-15 11:43:51.828955] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:26.420 TLSTESTn1 00:20:26.699 11:43:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:26.699 Running I/O for 10 seconds... 00:20:28.673 5010.00 IOPS, 19.57 MiB/s [2024-11-15T10:43:55.115Z] 5028.50 IOPS, 19.64 MiB/s [2024-11-15T10:43:56.057Z] 5129.67 IOPS, 20.04 MiB/s [2024-11-15T10:43:57.444Z] 5260.25 IOPS, 20.55 MiB/s [2024-11-15T10:43:58.387Z] 5353.40 IOPS, 20.91 MiB/s [2024-11-15T10:43:59.330Z] 5351.50 IOPS, 20.90 MiB/s [2024-11-15T10:44:00.272Z] 5333.14 IOPS, 20.83 MiB/s [2024-11-15T10:44:01.214Z] 5305.62 IOPS, 20.73 MiB/s [2024-11-15T10:44:02.157Z] 5318.22 IOPS, 20.77 MiB/s [2024-11-15T10:44:02.157Z] 5336.70 IOPS, 20.85 MiB/s 00:20:36.659 Latency(us) 00:20:36.659 [2024-11-15T10:44:02.157Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.659 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:36.659 Verification LBA range: start 0x0 length 0x2000 00:20:36.659 TLSTESTn1 : 10.02 5337.03 20.85 0.00 0.00 23945.17 5488.64 71652.69 00:20:36.659 [2024-11-15T10:44:02.158Z] =================================================================================================================== 00:20:36.660 [2024-11-15T10:44:02.158Z] Total : 5337.03 20.85 0.00 0.00 23945.17 5488.64 71652.69 00:20:36.660 { 00:20:36.660 "results": [ 00:20:36.660 { 00:20:36.660 "job": "TLSTESTn1", 00:20:36.660 "core_mask": "0x4", 00:20:36.660 "workload": "verify", 00:20:36.660 "status": "finished", 00:20:36.660 "verify_range": { 00:20:36.660 "start": 0, 00:20:36.660 "length": 8192 00:20:36.660 }, 00:20:36.660 "queue_depth": 128, 00:20:36.660 "io_size": 4096, 00:20:36.660 "runtime": 10.023361, 00:20:36.660 "iops": 5337.032159172956, 00:20:36.660 "mibps": 20.84778187176936, 00:20:36.660 "io_failed": 0, 00:20:36.660 "io_timeout": 0, 00:20:36.660 "avg_latency_us": 23945.169548805185, 00:20:36.660 "min_latency_us": 5488.64, 00:20:36.660 "max_latency_us": 71652.69333333333 00:20:36.660 } 00:20:36.660 ], 00:20:36.660 "core_count": 1 
00:20:36.660 } 00:20:36.660 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:36.660 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1091089 00:20:36.660 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1091089 ']' 00:20:36.660 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1091089 00:20:36.660 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:36.660 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:36.660 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1091089 00:20:36.921 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:36.922 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:36.922 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1091089' 00:20:36.922 killing process with pid 1091089 00:20:36.922 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1091089 00:20:36.922 Received shutdown signal, test time was about 10.000000 seconds 00:20:36.922 00:20:36.922 Latency(us) 00:20:36.922 [2024-11-15T10:44:02.420Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.922 [2024-11-15T10:44:02.420Z] =================================================================================================================== 00:20:36.922 [2024-11-15T10:44:02.420Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:36.922 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1091089 00:20:36.922 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.KoZLvk7oI9 00:20:36.922 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KoZLvk7oI9 00:20:36.922 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:36.922 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KoZLvk7oI9 00:20:36.922 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:36.922 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:36.922 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:36.922 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:36.922 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KoZLvk7oI9 00:20:36.922 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:36.922 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:36.922 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:36.922 11:44:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.KoZLvk7oI9 00:20:36.922 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:36.922 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1093260 00:20:36.922 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:36.922 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1093260 /var/tmp/bdevperf.sock 00:20:36.922 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:36.922 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1093260 ']' 00:20:36.922 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:36.922 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:36.922 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:36.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:36.922 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:36.922 11:44:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:36.922 [2024-11-15 11:44:02.325916] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:20:36.922 [2024-11-15 11:44:02.325973] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1093260 ] 00:20:36.922 [2024-11-15 11:44:02.410023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.184 [2024-11-15 11:44:02.438410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:37.756 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:37.756 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:37.756 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KoZLvk7oI9 00:20:38.017 [2024-11-15 11:44:03.257435] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.KoZLvk7oI9': 0100666 00:20:38.017 [2024-11-15 11:44:03.257456] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:38.017 request: 00:20:38.017 { 00:20:38.017 "name": "key0", 00:20:38.017 "path": "/tmp/tmp.KoZLvk7oI9", 00:20:38.017 "method": "keyring_file_add_key", 00:20:38.017 "req_id": 1 00:20:38.017 } 00:20:38.017 Got JSON-RPC error response 00:20:38.017 response: 00:20:38.017 { 00:20:38.017 "code": -1, 00:20:38.017 "message": "Operation not permitted" 00:20:38.017 } 00:20:38.017 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:38.017 [2024-11-15 11:44:03.441967] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:38.017 [2024-11-15 11:44:03.441988] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:38.017 request: 00:20:38.017 { 00:20:38.017 "name": "TLSTEST", 00:20:38.017 "trtype": "tcp", 00:20:38.017 "traddr": "10.0.0.2", 00:20:38.017 "adrfam": "ipv4", 00:20:38.017 "trsvcid": "4420", 00:20:38.017 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.017 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:38.017 "prchk_reftag": false, 00:20:38.017 "prchk_guard": false, 00:20:38.017 "hdgst": false, 00:20:38.017 "ddgst": false, 00:20:38.017 "psk": "key0", 00:20:38.017 "allow_unrecognized_csi": false, 00:20:38.017 "method": "bdev_nvme_attach_controller", 00:20:38.017 "req_id": 1 00:20:38.017 } 00:20:38.017 Got JSON-RPC error response 00:20:38.017 response: 00:20:38.017 { 00:20:38.017 "code": -126, 00:20:38.017 "message": "Required key not available" 00:20:38.017 } 00:20:38.017 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1093260 00:20:38.017 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1093260 ']' 00:20:38.017 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1093260 00:20:38.017 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:38.017 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:38.017 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1093260 00:20:38.278 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:38.278 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:38.278 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1093260' 00:20:38.278 killing process with pid 1093260 00:20:38.278 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1093260 00:20:38.278 Received shutdown signal, test time was about 10.000000 seconds 00:20:38.278 00:20:38.278 Latency(us) 00:20:38.278 [2024-11-15T10:44:03.776Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.278 [2024-11-15T10:44:03.776Z] =================================================================================================================== 00:20:38.278 [2024-11-15T10:44:03.776Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:38.278 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1093260 00:20:38.278 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:38.278 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:38.278 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:38.278 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:38.278 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:38.278 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1090546 00:20:38.278 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1090546 ']' 00:20:38.278 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1090546 00:20:38.278 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:38.278 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:38.278 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1090546 00:20:38.278 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:38.278 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:38.278 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1090546' 00:20:38.278 killing process with pid 1090546 00:20:38.278 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1090546 00:20:38.278 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1090546 00:20:38.539 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:20:38.539 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:38.539 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:38.539 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.540 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=1093602 00:20:38.540 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1093602 00:20:38.540 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:38.540 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1093602 ']' 00:20:38.540 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.540 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:38.540 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.540 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:38.540 11:44:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:38.540 [2024-11-15 11:44:03.877299] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:20:38.540 [2024-11-15 11:44:03.877360] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.540 [2024-11-15 11:44:03.965863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.540 [2024-11-15 11:44:03.994709] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.540 [2024-11-15 11:44:03.994738] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:38.540 [2024-11-15 11:44:03.994744] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:38.540 [2024-11-15 11:44:03.994748] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:38.540 [2024-11-15 11:44:03.994752] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:38.540 [2024-11-15 11:44:03.995217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:39.481 11:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:39.481 11:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:39.481 11:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:39.481 11:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:39.481 11:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.482 11:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:39.482 11:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.KoZLvk7oI9 00:20:39.482 11:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:39.482 11:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.KoZLvk7oI9 00:20:39.482 11:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:20:39.482 11:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:39.482 11:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:20:39.482 11:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:39.482 11:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.KoZLvk7oI9 00:20:39.482 11:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.KoZLvk7oI9 00:20:39.482 11:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:39.482 [2024-11-15 11:44:04.867833] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:39.482 11:44:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:39.743 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:39.743 [2024-11-15 11:44:05.228723] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:39.743 [2024-11-15 11:44:05.228923] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:40.004 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:40.004 malloc0 00:20:40.004 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:40.264 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.KoZLvk7oI9 00:20:40.264 [2024-11-15 
11:44:05.751641] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.KoZLvk7oI9': 0100666 00:20:40.264 [2024-11-15 11:44:05.751660] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:40.264 request: 00:20:40.264 { 00:20:40.264 "name": "key0", 00:20:40.264 "path": "/tmp/tmp.KoZLvk7oI9", 00:20:40.264 "method": "keyring_file_add_key", 00:20:40.264 "req_id": 1 00:20:40.264 } 00:20:40.264 Got JSON-RPC error response 00:20:40.264 response: 00:20:40.264 { 00:20:40.264 "code": -1, 00:20:40.264 "message": "Operation not permitted" 00:20:40.264 } 00:20:40.526 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:40.526 [2024-11-15 11:44:05.924091] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:20:40.526 [2024-11-15 11:44:05.924116] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:40.526 request: 00:20:40.526 { 00:20:40.526 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:40.526 "host": "nqn.2016-06.io.spdk:host1", 00:20:40.526 "psk": "key0", 00:20:40.526 "method": "nvmf_subsystem_add_host", 00:20:40.526 "req_id": 1 00:20:40.526 } 00:20:40.526 Got JSON-RPC error response 00:20:40.526 response: 00:20:40.526 { 00:20:40.526 "code": -32603, 00:20:40.526 "message": "Internal error" 00:20:40.526 } 00:20:40.526 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:40.526 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:40.526 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:40.526 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:40.526 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1093602 00:20:40.526 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1093602 ']' 00:20:40.526 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1093602 00:20:40.526 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:40.526 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:40.526 11:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1093602 00:20:40.787 11:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:40.787 11:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:40.787 11:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1093602' 00:20:40.788 killing process with pid 1093602 00:20:40.788 11:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1093602 00:20:40.788 11:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1093602 00:20:40.788 11:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.KoZLvk7oI9 00:20:40.788 11:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:20:40.788 11:44:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:40.788 11:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:40.788 11:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:40.788 11:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1094124 00:20:40.788 11:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1094124 00:20:40.788 11:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:40.788 11:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1094124 ']' 00:20:40.788 11:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.788 11:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:40.788 11:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:40.788 11:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:40.788 11:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:40.788 [2024-11-15 11:44:06.191950] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:20:40.788 [2024-11-15 11:44:06.192007] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:41.049 [2024-11-15 11:44:06.285283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:41.049 [2024-11-15 11:44:06.318592] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:41.049 [2024-11-15 11:44:06.318626] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:41.049 [2024-11-15 11:44:06.318632] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:41.049 [2024-11-15 11:44:06.318636] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:41.049 [2024-11-15 11:44:06.318640] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
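The nvmf_tgt instance started here (pid 1094124) is then configured by setup_nvmf_tgt with the 0600-permission key written above, before its configuration is dumped via save_config. A condensed sketch of that target-side RPC sequence, assuming the target answers on the default /var/tmp/spdk.sock (SPDK_ROOT is again an illustrative stand-in; the RPC names and arguments mirror the trace that follows):

    RPC="$SPDK_ROOT/scripts/rpc.py"   # assumption: default RPC socket

    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k marks the listener as TLS-enabled ("TLS support is considered experimental").
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # keyring_file_add_key only accepts 0600 key files; the 0666 attempt
    # earlier in this run failed with "Invalid permissions".
    $RPC keyring_file_add_key key0 /tmp/tmp.KoZLvk7oI9
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0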
00:20:41.049 [2024-11-15 11:44:06.319115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:41.630 11:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:41.630 11:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:41.630 11:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:41.630 11:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:41.630 11:44:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:41.630 11:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:41.630 11:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.KoZLvk7oI9 00:20:41.630 11:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.KoZLvk7oI9 00:20:41.630 11:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:41.895 [2024-11-15 11:44:07.194260] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:41.895 11:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:41.895 11:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:42.156 [2024-11-15 11:44:07.515038] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:42.156 [2024-11-15 11:44:07.515244] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:42.156 11:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:42.417 malloc0 00:20:42.417 11:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:42.417 11:44:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.KoZLvk7oI9 00:20:42.678 11:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:42.939 11:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1094639 00:20:42.939 11:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:42.939 11:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:42.939 11:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1094639 /var/tmp/bdevperf.sock 00:20:42.939 11:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 1094639 ']' 00:20:42.939 11:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:42.939 11:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:42.939 11:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:42.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:42.939 11:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:42.939 11:44:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:42.939 [2024-11-15 11:44:08.241419] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:20:42.939 [2024-11-15 11:44:08.241473] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1094639 ] 00:20:42.939 [2024-11-15 11:44:08.322814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.939 [2024-11-15 11:44:08.352211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:43.881 11:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:43.881 11:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:43.881 11:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KoZLvk7oI9 00:20:43.881 11:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:43.881 [2024-11-15 11:44:09.335626] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:44.142 TLSTESTn1 00:20:44.142 11:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:44.403 11:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:20:44.403 "subsystems": [ 00:20:44.403 { 00:20:44.403 "subsystem": "keyring", 00:20:44.403 "config": [ 00:20:44.403 { 00:20:44.403 "method": "keyring_file_add_key", 00:20:44.403 "params": { 00:20:44.403 "name": "key0", 00:20:44.403 "path": "/tmp/tmp.KoZLvk7oI9" 00:20:44.403 } 00:20:44.403 } 00:20:44.403 ] 00:20:44.403 }, 00:20:44.403 { 00:20:44.403 "subsystem": "iobuf", 00:20:44.403 "config": [ 00:20:44.403 { 00:20:44.403 "method": "iobuf_set_options", 00:20:44.403 "params": { 00:20:44.403 "small_pool_count": 8192, 00:20:44.403 "large_pool_count": 1024, 00:20:44.403 "small_bufsize": 8192, 00:20:44.403 "large_bufsize": 135168, 00:20:44.403 "enable_numa": false 00:20:44.403 } 00:20:44.403 } 00:20:44.403 ] 00:20:44.403 }, 00:20:44.403 { 00:20:44.403 "subsystem": "sock", 00:20:44.403 "config": [ 00:20:44.403 { 00:20:44.403 "method": "sock_set_default_impl", 00:20:44.403 "params": { 00:20:44.403 "impl_name": "posix" 
00:20:44.403 } 00:20:44.403 }, 00:20:44.403 { 00:20:44.403 "method": "sock_impl_set_options", 00:20:44.403 "params": { 00:20:44.403 "impl_name": "ssl", 00:20:44.403 "recv_buf_size": 4096, 00:20:44.403 "send_buf_size": 4096, 00:20:44.403 "enable_recv_pipe": true, 00:20:44.403 "enable_quickack": false, 00:20:44.404 "enable_placement_id": 0, 00:20:44.404 "enable_zerocopy_send_server": true, 00:20:44.404 "enable_zerocopy_send_client": false, 00:20:44.404 "zerocopy_threshold": 0, 00:20:44.404 "tls_version": 0, 00:20:44.404 "enable_ktls": false 00:20:44.404 } 00:20:44.404 }, 00:20:44.404 { 00:20:44.404 "method": "sock_impl_set_options", 00:20:44.404 "params": { 00:20:44.404 "impl_name": "posix", 00:20:44.404 "recv_buf_size": 2097152, 00:20:44.404 "send_buf_size": 2097152, 00:20:44.404 "enable_recv_pipe": true, 00:20:44.404 "enable_quickack": false, 00:20:44.404 "enable_placement_id": 0, 00:20:44.404 "enable_zerocopy_send_server": true, 00:20:44.404 "enable_zerocopy_send_client": false, 00:20:44.404 "zerocopy_threshold": 0, 00:20:44.404 "tls_version": 0, 00:20:44.404 "enable_ktls": false 00:20:44.404 } 00:20:44.404 } 00:20:44.404 ] 00:20:44.404 }, 00:20:44.404 { 00:20:44.404 "subsystem": "vmd", 00:20:44.404 "config": [] 00:20:44.404 }, 00:20:44.404 { 00:20:44.404 "subsystem": "accel", 00:20:44.404 "config": [ 00:20:44.404 { 00:20:44.404 "method": "accel_set_options", 00:20:44.404 "params": { 00:20:44.404 "small_cache_size": 128, 00:20:44.404 "large_cache_size": 16, 00:20:44.404 "task_count": 2048, 00:20:44.404 "sequence_count": 2048, 00:20:44.404 "buf_count": 2048 00:20:44.404 } 00:20:44.404 } 00:20:44.404 ] 00:20:44.404 }, 00:20:44.404 { 00:20:44.404 "subsystem": "bdev", 00:20:44.404 "config": [ 00:20:44.404 { 00:20:44.404 "method": "bdev_set_options", 00:20:44.404 "params": { 00:20:44.404 "bdev_io_pool_size": 65535, 00:20:44.404 "bdev_io_cache_size": 256, 00:20:44.404 "bdev_auto_examine": true, 00:20:44.404 "iobuf_small_cache_size": 128, 00:20:44.404 "iobuf_large_cache_size": 16 00:20:44.404 } 00:20:44.404 }, 00:20:44.404 { 00:20:44.404 "method": "bdev_raid_set_options", 00:20:44.404 "params": { 00:20:44.404 "process_window_size_kb": 1024, 00:20:44.404 "process_max_bandwidth_mb_sec": 0 00:20:44.404 } 00:20:44.404 }, 00:20:44.404 { 00:20:44.404 "method": "bdev_iscsi_set_options", 00:20:44.404 "params": { 00:20:44.404 "timeout_sec": 30 00:20:44.404 } 00:20:44.404 }, 00:20:44.404 { 00:20:44.404 "method": "bdev_nvme_set_options", 00:20:44.404 "params": { 00:20:44.404 "action_on_timeout": "none", 00:20:44.404 "timeout_us": 0, 00:20:44.404 "timeout_admin_us": 0, 00:20:44.404 "keep_alive_timeout_ms": 10000, 00:20:44.404 "arbitration_burst": 0, 00:20:44.404 "low_priority_weight": 0, 00:20:44.404 "medium_priority_weight": 0, 00:20:44.404 "high_priority_weight": 0, 00:20:44.404 "nvme_adminq_poll_period_us": 10000, 00:20:44.404 "nvme_ioq_poll_period_us": 0, 00:20:44.404 "io_queue_requests": 0, 00:20:44.404 "delay_cmd_submit": true, 00:20:44.404 "transport_retry_count": 4, 00:20:44.404 "bdev_retry_count": 3, 00:20:44.404 "transport_ack_timeout": 0, 00:20:44.404 "ctrlr_loss_timeout_sec": 0, 00:20:44.404 "reconnect_delay_sec": 0, 00:20:44.404 "fast_io_fail_timeout_sec": 0, 00:20:44.404 "disable_auto_failback": false, 00:20:44.404 "generate_uuids": false, 00:20:44.404 "transport_tos": 0, 00:20:44.404 "nvme_error_stat": false, 00:20:44.404 "rdma_srq_size": 0, 00:20:44.404 "io_path_stat": false, 00:20:44.404 "allow_accel_sequence": false, 00:20:44.404 "rdma_max_cq_size": 0, 00:20:44.404 
"rdma_cm_event_timeout_ms": 0, 00:20:44.404 "dhchap_digests": [ 00:20:44.404 "sha256", 00:20:44.404 "sha384", 00:20:44.404 "sha512" 00:20:44.404 ], 00:20:44.404 "dhchap_dhgroups": [ 00:20:44.404 "null", 00:20:44.404 "ffdhe2048", 00:20:44.404 "ffdhe3072", 00:20:44.404 "ffdhe4096", 00:20:44.404 "ffdhe6144", 00:20:44.404 "ffdhe8192" 00:20:44.404 ] 00:20:44.404 } 00:20:44.404 }, 00:20:44.404 { 00:20:44.404 "method": "bdev_nvme_set_hotplug", 00:20:44.404 "params": { 00:20:44.404 "period_us": 100000, 00:20:44.404 "enable": false 00:20:44.404 } 00:20:44.404 }, 00:20:44.404 { 00:20:44.404 "method": "bdev_malloc_create", 00:20:44.404 "params": { 00:20:44.404 "name": "malloc0", 00:20:44.404 "num_blocks": 8192, 00:20:44.404 "block_size": 4096, 00:20:44.404 "physical_block_size": 4096, 00:20:44.404 "uuid": "50e45c06-7c97-44c2-987f-8028c7699bc2", 00:20:44.404 "optimal_io_boundary": 0, 00:20:44.404 "md_size": 0, 00:20:44.404 "dif_type": 0, 00:20:44.404 "dif_is_head_of_md": false, 00:20:44.404 "dif_pi_format": 0 00:20:44.404 } 00:20:44.404 }, 00:20:44.404 { 00:20:44.404 "method": "bdev_wait_for_examine" 00:20:44.404 } 00:20:44.404 ] 00:20:44.404 }, 00:20:44.404 { 00:20:44.404 "subsystem": "nbd", 00:20:44.404 "config": [] 00:20:44.404 }, 00:20:44.404 { 00:20:44.404 "subsystem": "scheduler", 00:20:44.404 "config": [ 00:20:44.404 { 00:20:44.404 "method": "framework_set_scheduler", 00:20:44.404 "params": { 00:20:44.404 "name": "static" 00:20:44.404 } 00:20:44.404 } 00:20:44.404 ] 00:20:44.404 }, 00:20:44.404 { 00:20:44.404 "subsystem": "nvmf", 00:20:44.404 "config": [ 00:20:44.404 { 00:20:44.404 "method": "nvmf_set_config", 00:20:44.404 "params": { 00:20:44.404 "discovery_filter": "match_any", 00:20:44.404 "admin_cmd_passthru": { 00:20:44.404 "identify_ctrlr": false 00:20:44.404 }, 00:20:44.404 "dhchap_digests": [ 00:20:44.404 "sha256", 00:20:44.404 "sha384", 00:20:44.404 "sha512" 00:20:44.404 ], 00:20:44.404 "dhchap_dhgroups": [ 00:20:44.404 "null", 00:20:44.404 "ffdhe2048", 00:20:44.404 "ffdhe3072", 00:20:44.404 "ffdhe4096", 00:20:44.404 "ffdhe6144", 00:20:44.404 "ffdhe8192" 00:20:44.404 ] 00:20:44.404 } 00:20:44.404 }, 00:20:44.404 { 00:20:44.404 "method": "nvmf_set_max_subsystems", 00:20:44.404 "params": { 00:20:44.404 "max_subsystems": 1024 00:20:44.404 } 00:20:44.404 }, 00:20:44.404 { 00:20:44.404 "method": "nvmf_set_crdt", 00:20:44.404 "params": { 00:20:44.404 "crdt1": 0, 00:20:44.404 "crdt2": 0, 00:20:44.404 "crdt3": 0 00:20:44.404 } 00:20:44.404 }, 00:20:44.404 { 00:20:44.404 "method": "nvmf_create_transport", 00:20:44.404 "params": { 00:20:44.404 "trtype": "TCP", 00:20:44.404 "max_queue_depth": 128, 00:20:44.404 "max_io_qpairs_per_ctrlr": 127, 00:20:44.404 "in_capsule_data_size": 4096, 00:20:44.404 "max_io_size": 131072, 00:20:44.405 "io_unit_size": 131072, 00:20:44.405 "max_aq_depth": 128, 00:20:44.405 "num_shared_buffers": 511, 00:20:44.405 "buf_cache_size": 4294967295, 00:20:44.405 "dif_insert_or_strip": false, 00:20:44.405 "zcopy": false, 00:20:44.405 "c2h_success": false, 00:20:44.405 "sock_priority": 0, 00:20:44.405 "abort_timeout_sec": 1, 00:20:44.405 "ack_timeout": 0, 00:20:44.405 "data_wr_pool_size": 0 00:20:44.405 } 00:20:44.405 }, 00:20:44.405 { 00:20:44.405 "method": "nvmf_create_subsystem", 00:20:44.405 "params": { 00:20:44.405 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.405 "allow_any_host": false, 00:20:44.405 "serial_number": "SPDK00000000000001", 00:20:44.405 "model_number": "SPDK bdev Controller", 00:20:44.405 "max_namespaces": 10, 00:20:44.405 "min_cntlid": 1, 00:20:44.405 
"max_cntlid": 65519, 00:20:44.405 "ana_reporting": false 00:20:44.405 } 00:20:44.405 }, 00:20:44.405 { 00:20:44.405 "method": "nvmf_subsystem_add_host", 00:20:44.405 "params": { 00:20:44.405 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.405 "host": "nqn.2016-06.io.spdk:host1", 00:20:44.405 "psk": "key0" 00:20:44.405 } 00:20:44.405 }, 00:20:44.405 { 00:20:44.405 "method": "nvmf_subsystem_add_ns", 00:20:44.405 "params": { 00:20:44.405 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.405 "namespace": { 00:20:44.405 "nsid": 1, 00:20:44.405 "bdev_name": "malloc0", 00:20:44.405 "nguid": "50E45C067C9744C2987F8028C7699BC2", 00:20:44.405 "uuid": "50e45c06-7c97-44c2-987f-8028c7699bc2", 00:20:44.405 "no_auto_visible": false 00:20:44.405 } 00:20:44.405 } 00:20:44.405 }, 00:20:44.405 { 00:20:44.405 "method": "nvmf_subsystem_add_listener", 00:20:44.405 "params": { 00:20:44.405 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.405 "listen_address": { 00:20:44.405 "trtype": "TCP", 00:20:44.405 "adrfam": "IPv4", 00:20:44.405 "traddr": "10.0.0.2", 00:20:44.405 "trsvcid": "4420" 00:20:44.405 }, 00:20:44.405 "secure_channel": true 00:20:44.405 } 00:20:44.405 } 00:20:44.405 ] 00:20:44.405 } 00:20:44.405 ] 00:20:44.405 }' 00:20:44.405 11:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:44.666 11:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:20:44.666 "subsystems": [ 00:20:44.666 { 00:20:44.666 "subsystem": "keyring", 00:20:44.666 "config": [ 00:20:44.666 { 00:20:44.666 "method": "keyring_file_add_key", 00:20:44.666 "params": { 00:20:44.666 "name": "key0", 00:20:44.666 "path": "/tmp/tmp.KoZLvk7oI9" 00:20:44.666 } 00:20:44.666 } 00:20:44.666 ] 00:20:44.666 }, 00:20:44.666 { 00:20:44.666 "subsystem": "iobuf", 00:20:44.666 "config": [ 00:20:44.666 { 00:20:44.666 "method": "iobuf_set_options", 00:20:44.666 "params": { 00:20:44.666 "small_pool_count": 8192, 00:20:44.666 "large_pool_count": 1024, 00:20:44.666 "small_bufsize": 8192, 00:20:44.666 "large_bufsize": 135168, 00:20:44.666 "enable_numa": false 00:20:44.666 } 00:20:44.666 } 00:20:44.666 ] 00:20:44.666 }, 00:20:44.666 { 00:20:44.666 "subsystem": "sock", 00:20:44.666 "config": [ 00:20:44.666 { 00:20:44.666 "method": "sock_set_default_impl", 00:20:44.666 "params": { 00:20:44.666 "impl_name": "posix" 00:20:44.666 } 00:20:44.666 }, 00:20:44.666 { 00:20:44.666 "method": "sock_impl_set_options", 00:20:44.666 "params": { 00:20:44.666 "impl_name": "ssl", 00:20:44.666 "recv_buf_size": 4096, 00:20:44.666 "send_buf_size": 4096, 00:20:44.666 "enable_recv_pipe": true, 00:20:44.666 "enable_quickack": false, 00:20:44.666 "enable_placement_id": 0, 00:20:44.666 "enable_zerocopy_send_server": true, 00:20:44.666 "enable_zerocopy_send_client": false, 00:20:44.666 "zerocopy_threshold": 0, 00:20:44.666 "tls_version": 0, 00:20:44.666 "enable_ktls": false 00:20:44.666 } 00:20:44.666 }, 00:20:44.666 { 00:20:44.666 "method": "sock_impl_set_options", 00:20:44.666 "params": { 00:20:44.666 "impl_name": "posix", 00:20:44.666 "recv_buf_size": 2097152, 00:20:44.666 "send_buf_size": 2097152, 00:20:44.666 "enable_recv_pipe": true, 00:20:44.666 "enable_quickack": false, 00:20:44.666 "enable_placement_id": 0, 00:20:44.666 "enable_zerocopy_send_server": true, 00:20:44.667 "enable_zerocopy_send_client": false, 00:20:44.667 "zerocopy_threshold": 0, 00:20:44.667 "tls_version": 0, 00:20:44.667 "enable_ktls": false 00:20:44.667 } 00:20:44.667 
} 00:20:44.667 ] 00:20:44.667 }, 00:20:44.667 { 00:20:44.667 "subsystem": "vmd", 00:20:44.667 "config": [] 00:20:44.667 }, 00:20:44.667 { 00:20:44.667 "subsystem": "accel", 00:20:44.667 "config": [ 00:20:44.667 { 00:20:44.667 "method": "accel_set_options", 00:20:44.667 "params": { 00:20:44.667 "small_cache_size": 128, 00:20:44.667 "large_cache_size": 16, 00:20:44.667 "task_count": 2048, 00:20:44.667 "sequence_count": 2048, 00:20:44.667 "buf_count": 2048 00:20:44.667 } 00:20:44.667 } 00:20:44.667 ] 00:20:44.667 }, 00:20:44.667 { 00:20:44.667 "subsystem": "bdev", 00:20:44.667 "config": [ 00:20:44.667 { 00:20:44.667 "method": "bdev_set_options", 00:20:44.667 "params": { 00:20:44.667 "bdev_io_pool_size": 65535, 00:20:44.667 "bdev_io_cache_size": 256, 00:20:44.667 "bdev_auto_examine": true, 00:20:44.667 "iobuf_small_cache_size": 128, 00:20:44.667 "iobuf_large_cache_size": 16 00:20:44.667 } 00:20:44.667 }, 00:20:44.667 { 00:20:44.667 "method": "bdev_raid_set_options", 00:20:44.667 "params": { 00:20:44.667 "process_window_size_kb": 1024, 00:20:44.667 "process_max_bandwidth_mb_sec": 0 00:20:44.667 } 00:20:44.667 }, 00:20:44.667 { 00:20:44.667 "method": "bdev_iscsi_set_options", 00:20:44.667 "params": { 00:20:44.667 "timeout_sec": 30 00:20:44.667 } 00:20:44.667 }, 00:20:44.667 { 00:20:44.667 "method": "bdev_nvme_set_options", 00:20:44.667 "params": { 00:20:44.667 "action_on_timeout": "none", 00:20:44.667 "timeout_us": 0, 00:20:44.667 "timeout_admin_us": 0, 00:20:44.667 "keep_alive_timeout_ms": 10000, 00:20:44.667 "arbitration_burst": 0, 00:20:44.667 "low_priority_weight": 0, 00:20:44.667 "medium_priority_weight": 0, 00:20:44.667 "high_priority_weight": 0, 00:20:44.667 "nvme_adminq_poll_period_us": 10000, 00:20:44.667 "nvme_ioq_poll_period_us": 0, 00:20:44.667 "io_queue_requests": 512, 00:20:44.667 "delay_cmd_submit": true, 00:20:44.667 "transport_retry_count": 4, 00:20:44.667 "bdev_retry_count": 3, 00:20:44.667 "transport_ack_timeout": 0, 00:20:44.667 "ctrlr_loss_timeout_sec": 0, 00:20:44.667 "reconnect_delay_sec": 0, 00:20:44.667 "fast_io_fail_timeout_sec": 0, 00:20:44.667 "disable_auto_failback": false, 00:20:44.667 "generate_uuids": false, 00:20:44.667 "transport_tos": 0, 00:20:44.667 "nvme_error_stat": false, 00:20:44.667 "rdma_srq_size": 0, 00:20:44.667 "io_path_stat": false, 00:20:44.667 "allow_accel_sequence": false, 00:20:44.667 "rdma_max_cq_size": 0, 00:20:44.667 "rdma_cm_event_timeout_ms": 0, 00:20:44.667 "dhchap_digests": [ 00:20:44.667 "sha256", 00:20:44.667 "sha384", 00:20:44.667 "sha512" 00:20:44.667 ], 00:20:44.667 "dhchap_dhgroups": [ 00:20:44.667 "null", 00:20:44.667 "ffdhe2048", 00:20:44.667 "ffdhe3072", 00:20:44.667 "ffdhe4096", 00:20:44.667 "ffdhe6144", 00:20:44.667 "ffdhe8192" 00:20:44.667 ] 00:20:44.667 } 00:20:44.667 }, 00:20:44.667 { 00:20:44.667 "method": "bdev_nvme_attach_controller", 00:20:44.667 "params": { 00:20:44.667 "name": "TLSTEST", 00:20:44.667 "trtype": "TCP", 00:20:44.667 "adrfam": "IPv4", 00:20:44.667 "traddr": "10.0.0.2", 00:20:44.667 "trsvcid": "4420", 00:20:44.667 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.667 "prchk_reftag": false, 00:20:44.667 "prchk_guard": false, 00:20:44.667 "ctrlr_loss_timeout_sec": 0, 00:20:44.667 "reconnect_delay_sec": 0, 00:20:44.667 "fast_io_fail_timeout_sec": 0, 00:20:44.667 "psk": "key0", 00:20:44.667 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:44.667 "hdgst": false, 00:20:44.667 "ddgst": false, 00:20:44.667 "multipath": "multipath" 00:20:44.667 } 00:20:44.667 }, 00:20:44.667 { 00:20:44.667 "method": 
"bdev_nvme_set_hotplug", 00:20:44.667 "params": { 00:20:44.667 "period_us": 100000, 00:20:44.667 "enable": false 00:20:44.667 } 00:20:44.667 }, 00:20:44.667 { 00:20:44.667 "method": "bdev_wait_for_examine" 00:20:44.667 } 00:20:44.667 ] 00:20:44.667 }, 00:20:44.667 { 00:20:44.667 "subsystem": "nbd", 00:20:44.667 "config": [] 00:20:44.667 } 00:20:44.667 ] 00:20:44.667 }' 00:20:44.667 11:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1094639 00:20:44.667 11:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1094639 ']' 00:20:44.667 11:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1094639 00:20:44.667 11:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:44.667 11:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:44.667 11:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1094639 00:20:44.667 11:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:44.667 11:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:44.667 11:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1094639' 00:20:44.667 killing process with pid 1094639 00:20:44.667 11:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1094639 00:20:44.667 Received shutdown signal, test time was about 10.000000 seconds 00:20:44.667 00:20:44.667 Latency(us) 00:20:44.667 [2024-11-15T10:44:10.165Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:44.667 [2024-11-15T10:44:10.165Z] =================================================================================================================== 00:20:44.667 [2024-11-15T10:44:10.165Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:44.667 11:44:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1094639 00:20:44.667 11:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1094124 00:20:44.667 11:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1094124 ']' 00:20:44.667 11:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1094124 00:20:44.667 11:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:44.667 11:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:44.667 11:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1094124 00:20:44.955 11:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:44.955 11:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:44.955 11:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1094124' 00:20:44.955 killing process with pid 1094124 00:20:44.955 11:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1094124 00:20:44.955 11:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1094124 00:20:44.955 11:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:44.955 11:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:44.955 11:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:44.955 11:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:44.955 11:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:20:44.955 "subsystems": [ 00:20:44.955 { 00:20:44.955 "subsystem": "keyring", 00:20:44.955 "config": [ 00:20:44.955 { 00:20:44.955 "method": "keyring_file_add_key", 00:20:44.955 "params": { 00:20:44.955 "name": "key0", 00:20:44.955 "path": "/tmp/tmp.KoZLvk7oI9" 00:20:44.955 } 00:20:44.955 } 00:20:44.955 ] 00:20:44.955 }, 00:20:44.955 { 00:20:44.955 "subsystem": "iobuf", 00:20:44.955 "config": [ 00:20:44.955 { 00:20:44.955 "method": "iobuf_set_options", 00:20:44.955 "params": { 00:20:44.955 "small_pool_count": 8192, 00:20:44.955 "large_pool_count": 1024, 00:20:44.955 "small_bufsize": 8192, 00:20:44.955 "large_bufsize": 135168, 00:20:44.955 "enable_numa": false 00:20:44.955 } 00:20:44.955 } 00:20:44.955 ] 00:20:44.955 }, 00:20:44.955 { 00:20:44.955 "subsystem": "sock", 00:20:44.955 "config": [ 00:20:44.955 { 00:20:44.955 "method": "sock_set_default_impl", 00:20:44.955 "params": { 00:20:44.955 "impl_name": "posix" 00:20:44.955 } 00:20:44.955 }, 00:20:44.955 { 00:20:44.955 "method": "sock_impl_set_options", 00:20:44.955 "params": { 00:20:44.955 "impl_name": "ssl", 00:20:44.955 "recv_buf_size": 4096, 00:20:44.955 "send_buf_size": 4096, 00:20:44.955 "enable_recv_pipe": true, 00:20:44.955 "enable_quickack": false, 00:20:44.955 "enable_placement_id": 0, 00:20:44.955 "enable_zerocopy_send_server": true, 00:20:44.955 "enable_zerocopy_send_client": false, 00:20:44.955 "zerocopy_threshold": 0, 00:20:44.955 "tls_version": 0, 00:20:44.955 "enable_ktls": false 00:20:44.955 } 00:20:44.955 }, 00:20:44.955 { 00:20:44.955 "method": "sock_impl_set_options", 00:20:44.955 "params": { 00:20:44.956 "impl_name": "posix", 00:20:44.956 "recv_buf_size": 2097152, 00:20:44.956 "send_buf_size": 2097152, 00:20:44.956 "enable_recv_pipe": true, 00:20:44.956 "enable_quickack": false, 00:20:44.956 "enable_placement_id": 0, 00:20:44.956 "enable_zerocopy_send_server": true, 00:20:44.956 "enable_zerocopy_send_client": false, 00:20:44.956 "zerocopy_threshold": 0, 00:20:44.956 "tls_version": 0, 00:20:44.956 "enable_ktls": false 00:20:44.956 } 00:20:44.956 } 00:20:44.956 ] 00:20:44.956 }, 00:20:44.956 { 00:20:44.956 "subsystem": "vmd", 00:20:44.956 "config": [] 00:20:44.956 }, 00:20:44.956 { 00:20:44.956 "subsystem": "accel", 00:20:44.956 "config": [ 00:20:44.956 { 00:20:44.956 "method": "accel_set_options", 00:20:44.956 "params": { 00:20:44.956 "small_cache_size": 128, 00:20:44.956 "large_cache_size": 16, 00:20:44.956 "task_count": 2048, 00:20:44.956 "sequence_count": 2048, 00:20:44.956 "buf_count": 2048 00:20:44.956 } 00:20:44.956 } 00:20:44.956 ] 00:20:44.956 }, 00:20:44.956 { 00:20:44.956 "subsystem": "bdev", 00:20:44.956 "config": [ 00:20:44.956 { 00:20:44.956 "method": "bdev_set_options", 00:20:44.956 "params": { 00:20:44.956 "bdev_io_pool_size": 65535, 00:20:44.956 "bdev_io_cache_size": 256, 00:20:44.956 "bdev_auto_examine": true, 00:20:44.956 "iobuf_small_cache_size": 128, 00:20:44.956 "iobuf_large_cache_size": 16 00:20:44.956 } 00:20:44.956 }, 00:20:44.956 { 00:20:44.956 "method": "bdev_raid_set_options", 00:20:44.956 "params": { 00:20:44.956 
"process_window_size_kb": 1024, 00:20:44.956 "process_max_bandwidth_mb_sec": 0 00:20:44.956 } 00:20:44.956 }, 00:20:44.956 { 00:20:44.956 "method": "bdev_iscsi_set_options", 00:20:44.956 "params": { 00:20:44.956 "timeout_sec": 30 00:20:44.956 } 00:20:44.956 }, 00:20:44.956 { 00:20:44.956 "method": "bdev_nvme_set_options", 00:20:44.956 "params": { 00:20:44.956 "action_on_timeout": "none", 00:20:44.956 "timeout_us": 0, 00:20:44.956 "timeout_admin_us": 0, 00:20:44.956 "keep_alive_timeout_ms": 10000, 00:20:44.956 "arbitration_burst": 0, 00:20:44.956 "low_priority_weight": 0, 00:20:44.956 "medium_priority_weight": 0, 00:20:44.956 "high_priority_weight": 0, 00:20:44.956 "nvme_adminq_poll_period_us": 10000, 00:20:44.956 "nvme_ioq_poll_period_us": 0, 00:20:44.956 "io_queue_requests": 0, 00:20:44.956 "delay_cmd_submit": true, 00:20:44.956 "transport_retry_count": 4, 00:20:44.956 "bdev_retry_count": 3, 00:20:44.956 "transport_ack_timeout": 0, 00:20:44.956 "ctrlr_loss_timeout_sec": 0, 00:20:44.956 "reconnect_delay_sec": 0, 00:20:44.956 "fast_io_fail_timeout_sec": 0, 00:20:44.956 "disable_auto_failback": false, 00:20:44.956 "generate_uuids": false, 00:20:44.956 "transport_tos": 0, 00:20:44.956 "nvme_error_stat": false, 00:20:44.956 "rdma_srq_size": 0, 00:20:44.956 "io_path_stat": false, 00:20:44.956 "allow_accel_sequence": false, 00:20:44.956 "rdma_max_cq_size": 0, 00:20:44.956 "rdma_cm_event_timeout_ms": 0, 00:20:44.956 "dhchap_digests": [ 00:20:44.956 "sha256", 00:20:44.956 "sha384", 00:20:44.956 "sha512" 00:20:44.956 ], 00:20:44.956 "dhchap_dhgroups": [ 00:20:44.956 "null", 00:20:44.956 "ffdhe2048", 00:20:44.956 "ffdhe3072", 00:20:44.956 "ffdhe4096", 00:20:44.956 "ffdhe6144", 00:20:44.956 "ffdhe8192" 00:20:44.956 ] 00:20:44.956 } 00:20:44.956 }, 00:20:44.956 { 00:20:44.956 "method": "bdev_nvme_set_hotplug", 00:20:44.956 "params": { 00:20:44.956 "period_us": 100000, 00:20:44.956 "enable": false 00:20:44.956 } 00:20:44.956 }, 00:20:44.956 { 00:20:44.956 "method": "bdev_malloc_create", 00:20:44.956 "params": { 00:20:44.956 "name": "malloc0", 00:20:44.957 "num_blocks": 8192, 00:20:44.957 "block_size": 4096, 00:20:44.957 "physical_block_size": 4096, 00:20:44.957 "uuid": "50e45c06-7c97-44c2-987f-8028c7699bc2", 00:20:44.957 "optimal_io_boundary": 0, 00:20:44.957 "md_size": 0, 00:20:44.957 "dif_type": 0, 00:20:44.957 "dif_is_head_of_md": false, 00:20:44.957 "dif_pi_format": 0 00:20:44.957 } 00:20:44.957 }, 00:20:44.957 { 00:20:44.957 "method": "bdev_wait_for_examine" 00:20:44.957 } 00:20:44.957 ] 00:20:44.957 }, 00:20:44.957 { 00:20:44.957 "subsystem": "nbd", 00:20:44.957 "config": [] 00:20:44.957 }, 00:20:44.957 { 00:20:44.957 "subsystem": "scheduler", 00:20:44.957 "config": [ 00:20:44.957 { 00:20:44.957 "method": "framework_set_scheduler", 00:20:44.957 "params": { 00:20:44.957 "name": "static" 00:20:44.957 } 00:20:44.957 } 00:20:44.957 ] 00:20:44.957 }, 00:20:44.957 { 00:20:44.957 "subsystem": "nvmf", 00:20:44.957 "config": [ 00:20:44.957 { 00:20:44.957 "method": "nvmf_set_config", 00:20:44.957 "params": { 00:20:44.957 "discovery_filter": "match_any", 00:20:44.957 "admin_cmd_passthru": { 00:20:44.957 "identify_ctrlr": false 00:20:44.957 }, 00:20:44.957 "dhchap_digests": [ 00:20:44.957 "sha256", 00:20:44.957 "sha384", 00:20:44.957 "sha512" 00:20:44.957 ], 00:20:44.957 "dhchap_dhgroups": [ 00:20:44.957 "null", 00:20:44.957 "ffdhe2048", 00:20:44.957 "ffdhe3072", 00:20:44.957 "ffdhe4096", 00:20:44.957 "ffdhe6144", 00:20:44.957 "ffdhe8192" 00:20:44.957 ] 00:20:44.957 } 00:20:44.957 }, 00:20:44.957 { 
00:20:44.957 "method": "nvmf_set_max_subsystems", 00:20:44.957 "params": { 00:20:44.957 "max_subsystems": 1024 00:20:44.957 } 00:20:44.957 }, 00:20:44.957 { 00:20:44.957 "method": "nvmf_set_crdt", 00:20:44.957 "params": { 00:20:44.957 "crdt1": 0, 00:20:44.957 "crdt2": 0, 00:20:44.957 "crdt3": 0 00:20:44.957 } 00:20:44.957 }, 00:20:44.957 { 00:20:44.957 "method": "nvmf_create_transport", 00:20:44.957 "params": { 00:20:44.958 "trtype": "TCP", 00:20:44.958 "max_queue_depth": 128, 00:20:44.958 "max_io_qpairs_per_ctrlr": 127, 00:20:44.958 "in_capsule_data_size": 4096, 00:20:44.958 "max_io_size": 131072, 00:20:44.958 "io_unit_size": 131072, 00:20:44.958 "max_aq_depth": 128, 00:20:44.958 "num_shared_buffers": 511, 00:20:44.958 "buf_cache_size": 4294967295, 00:20:44.958 "dif_insert_or_strip": false, 00:20:44.958 "zcopy": false, 00:20:44.958 "c2h_success": false, 00:20:44.958 "sock_priority": 0, 00:20:44.958 "abort_timeout_sec": 1, 00:20:44.958 "ack_timeout": 0, 00:20:44.958 "data_wr_pool_size": 0 00:20:44.958 } 00:20:44.958 }, 00:20:44.958 { 00:20:44.958 "method": "nvmf_create_subsystem", 00:20:44.958 "params": { 00:20:44.958 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.958 "allow_any_host": false, 00:20:44.958 "serial_number": "SPDK00000000000001", 00:20:44.958 "model_number": "SPDK bdev Controller", 00:20:44.958 "max_namespaces": 10, 00:20:44.958 "min_cntlid": 1, 00:20:44.958 "max_cntlid": 65519, 00:20:44.958 "ana_reporting": false 00:20:44.958 } 00:20:44.958 }, 00:20:44.958 { 00:20:44.958 "method": "nvmf_subsystem_add_host", 00:20:44.958 "params": { 00:20:44.958 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.958 "host": "nqn.2016-06.io.spdk:host1", 00:20:44.958 "psk": "key0" 00:20:44.958 } 00:20:44.958 }, 00:20:44.958 { 00:20:44.958 "method": "nvmf_subsystem_add_ns", 00:20:44.958 "params": { 00:20:44.958 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.958 "namespace": { 00:20:44.958 "nsid": 1, 00:20:44.958 "bdev_name": "malloc0", 00:20:44.958 "nguid": "50E45C067C9744C2987F8028C7699BC2", 00:20:44.958 "uuid": "50e45c06-7c97-44c2-987f-8028c7699bc2", 00:20:44.958 "no_auto_visible": false 00:20:44.958 } 00:20:44.958 } 00:20:44.958 }, 00:20:44.958 { 00:20:44.958 "method": "nvmf_subsystem_add_listener", 00:20:44.958 "params": { 00:20:44.958 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.958 "listen_address": { 00:20:44.958 "trtype": "TCP", 00:20:44.958 "adrfam": "IPv4", 00:20:44.958 "traddr": "10.0.0.2", 00:20:44.958 "trsvcid": "4420" 00:20:44.958 }, 00:20:44.958 "secure_channel": true 00:20:44.958 } 00:20:44.958 } 00:20:44.958 ] 00:20:44.958 } 00:20:44.958 ] 00:20:44.958 }' 00:20:44.958 11:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1095015 00:20:44.958 11:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1095015 00:20:44.958 11:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:44.958 11:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1095015 ']' 00:20:44.958 11:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.960 11:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:44.960 11:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:20:44.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.960 11:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:44.960 11:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:44.960 [2024-11-15 11:44:10.334194] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:20:44.960 [2024-11-15 11:44:10.334251] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:44.960 [2024-11-15 11:44:10.424344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.220 [2024-11-15 11:44:10.457234] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:45.220 [2024-11-15 11:44:10.457261] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:45.220 [2024-11-15 11:44:10.457267] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:45.220 [2024-11-15 11:44:10.457272] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:45.220 [2024-11-15 11:44:10.457276] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:45.220 [2024-11-15 11:44:10.457800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:45.220 [2024-11-15 11:44:10.651167] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:45.220 [2024-11-15 11:44:10.683188] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:45.220 [2024-11-15 11:44:10.683389] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:45.791 11:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:45.791 11:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:45.791 11:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:45.791 11:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:45.791 11:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.791 11:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:45.791 11:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1095123 00:20:45.791 11:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1095123 /var/tmp/bdevperf.sock 00:20:45.791 11:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1095123 ']' 00:20:45.791 11:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:45.791 11:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:45.791 11:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
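Note on the restart being traced here: the target has just been relaunched from the JSON captured earlier with save_config, and bdevperf follows the same way below. The captured strings are echoed into bash process substitutions, which the kernel exposes as /dev/fd/62 (nvmf_tgt) and /dev/fd/63 (bdevperf). A minimal sketch of the pattern, assuming an SPDK checkout as the working directory and leaving out the ip netns wrapper seen in the trace:

    # capture the live configuration of both processes
    tgtconf=$(scripts/rpc.py save_config)
    bdevperfconf=$(scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)
    # relaunch each one non-interactively from the captured JSON;
    # <(echo ...) is what appears as -c /dev/fd/62 and -c /dev/fd/63 in the trace
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$tgtconf") &
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &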
00:20:45.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:45.791 11:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:45.791 11:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:45.791 11:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:45.791 11:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:20:45.791 "subsystems": [ 00:20:45.791 { 00:20:45.791 "subsystem": "keyring", 00:20:45.791 "config": [ 00:20:45.791 { 00:20:45.791 "method": "keyring_file_add_key", 00:20:45.791 "params": { 00:20:45.791 "name": "key0", 00:20:45.791 "path": "/tmp/tmp.KoZLvk7oI9" 00:20:45.791 } 00:20:45.791 } 00:20:45.791 ] 00:20:45.791 }, 00:20:45.791 { 00:20:45.791 "subsystem": "iobuf", 00:20:45.791 "config": [ 00:20:45.791 { 00:20:45.791 "method": "iobuf_set_options", 00:20:45.791 "params": { 00:20:45.791 "small_pool_count": 8192, 00:20:45.791 "large_pool_count": 1024, 00:20:45.791 "small_bufsize": 8192, 00:20:45.791 "large_bufsize": 135168, 00:20:45.791 "enable_numa": false 00:20:45.791 } 00:20:45.791 } 00:20:45.791 ] 00:20:45.791 }, 00:20:45.791 { 00:20:45.791 "subsystem": "sock", 00:20:45.791 "config": [ 00:20:45.791 { 00:20:45.791 "method": "sock_set_default_impl", 00:20:45.791 "params": { 00:20:45.791 "impl_name": "posix" 00:20:45.791 } 00:20:45.791 }, 00:20:45.791 { 00:20:45.791 "method": "sock_impl_set_options", 00:20:45.791 "params": { 00:20:45.791 "impl_name": "ssl", 00:20:45.791 "recv_buf_size": 4096, 00:20:45.791 "send_buf_size": 4096, 00:20:45.791 "enable_recv_pipe": true, 00:20:45.791 "enable_quickack": false, 00:20:45.791 "enable_placement_id": 0, 00:20:45.791 "enable_zerocopy_send_server": true, 00:20:45.791 "enable_zerocopy_send_client": false, 00:20:45.791 "zerocopy_threshold": 0, 00:20:45.791 "tls_version": 0, 00:20:45.791 "enable_ktls": false 00:20:45.791 } 00:20:45.791 }, 00:20:45.791 { 00:20:45.791 "method": "sock_impl_set_options", 00:20:45.791 "params": { 00:20:45.791 "impl_name": "posix", 00:20:45.791 "recv_buf_size": 2097152, 00:20:45.791 "send_buf_size": 2097152, 00:20:45.791 "enable_recv_pipe": true, 00:20:45.791 "enable_quickack": false, 00:20:45.791 "enable_placement_id": 0, 00:20:45.791 "enable_zerocopy_send_server": true, 00:20:45.791 "enable_zerocopy_send_client": false, 00:20:45.791 "zerocopy_threshold": 0, 00:20:45.791 "tls_version": 0, 00:20:45.791 "enable_ktls": false 00:20:45.791 } 00:20:45.791 } 00:20:45.791 ] 00:20:45.791 }, 00:20:45.791 { 00:20:45.791 "subsystem": "vmd", 00:20:45.791 "config": [] 00:20:45.791 }, 00:20:45.791 { 00:20:45.791 "subsystem": "accel", 00:20:45.791 "config": [ 00:20:45.791 { 00:20:45.791 "method": "accel_set_options", 00:20:45.791 "params": { 00:20:45.791 "small_cache_size": 128, 00:20:45.791 "large_cache_size": 16, 00:20:45.791 "task_count": 2048, 00:20:45.792 "sequence_count": 2048, 00:20:45.792 "buf_count": 2048 00:20:45.792 } 00:20:45.792 } 00:20:45.792 ] 00:20:45.792 }, 00:20:45.792 { 00:20:45.792 "subsystem": "bdev", 00:20:45.792 "config": [ 00:20:45.792 { 00:20:45.792 "method": "bdev_set_options", 00:20:45.792 "params": { 00:20:45.792 "bdev_io_pool_size": 65535, 00:20:45.792 "bdev_io_cache_size": 256, 00:20:45.792 "bdev_auto_examine": true, 00:20:45.792 "iobuf_small_cache_size": 128, 
00:20:45.792 "iobuf_large_cache_size": 16 00:20:45.792 } 00:20:45.792 }, 00:20:45.792 { 00:20:45.792 "method": "bdev_raid_set_options", 00:20:45.792 "params": { 00:20:45.792 "process_window_size_kb": 1024, 00:20:45.792 "process_max_bandwidth_mb_sec": 0 00:20:45.792 } 00:20:45.792 }, 00:20:45.792 { 00:20:45.792 "method": "bdev_iscsi_set_options", 00:20:45.792 "params": { 00:20:45.792 "timeout_sec": 30 00:20:45.792 } 00:20:45.792 }, 00:20:45.792 { 00:20:45.792 "method": "bdev_nvme_set_options", 00:20:45.792 "params": { 00:20:45.792 "action_on_timeout": "none", 00:20:45.792 "timeout_us": 0, 00:20:45.792 "timeout_admin_us": 0, 00:20:45.792 "keep_alive_timeout_ms": 10000, 00:20:45.792 "arbitration_burst": 0, 00:20:45.792 "low_priority_weight": 0, 00:20:45.792 "medium_priority_weight": 0, 00:20:45.792 "high_priority_weight": 0, 00:20:45.792 "nvme_adminq_poll_period_us": 10000, 00:20:45.792 "nvme_ioq_poll_period_us": 0, 00:20:45.792 "io_queue_requests": 512, 00:20:45.792 "delay_cmd_submit": true, 00:20:45.792 "transport_retry_count": 4, 00:20:45.792 "bdev_retry_count": 3, 00:20:45.792 "transport_ack_timeout": 0, 00:20:45.792 "ctrlr_loss_timeout_sec": 0, 00:20:45.792 "reconnect_delay_sec": 0, 00:20:45.792 "fast_io_fail_timeout_sec": 0, 00:20:45.792 "disable_auto_failback": false, 00:20:45.792 "generate_uuids": false, 00:20:45.792 "transport_tos": 0, 00:20:45.792 "nvme_error_stat": false, 00:20:45.792 "rdma_srq_size": 0, 00:20:45.792 "io_path_stat": false, 00:20:45.792 "allow_accel_sequence": false, 00:20:45.792 "rdma_max_cq_size": 0, 00:20:45.792 "rdma_cm_event_timeout_ms": 0, 00:20:45.792 "dhchap_digests": [ 00:20:45.792 "sha256", 00:20:45.792 "sha384", 00:20:45.792 "sha512" 00:20:45.792 ], 00:20:45.792 "dhchap_dhgroups": [ 00:20:45.792 "null", 00:20:45.792 "ffdhe2048", 00:20:45.792 "ffdhe3072", 00:20:45.792 "ffdhe4096", 00:20:45.792 "ffdhe6144", 00:20:45.792 "ffdhe8192" 00:20:45.792 ] 00:20:45.792 } 00:20:45.792 }, 00:20:45.792 { 00:20:45.792 "method": "bdev_nvme_attach_controller", 00:20:45.792 "params": { 00:20:45.792 "name": "TLSTEST", 00:20:45.792 "trtype": "TCP", 00:20:45.792 "adrfam": "IPv4", 00:20:45.792 "traddr": "10.0.0.2", 00:20:45.792 "trsvcid": "4420", 00:20:45.792 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:45.792 "prchk_reftag": false, 00:20:45.792 "prchk_guard": false, 00:20:45.792 "ctrlr_loss_timeout_sec": 0, 00:20:45.792 "reconnect_delay_sec": 0, 00:20:45.792 "fast_io_fail_timeout_sec": 0, 00:20:45.792 "psk": "key0", 00:20:45.792 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:45.792 "hdgst": false, 00:20:45.792 "ddgst": false, 00:20:45.792 "multipath": "multipath" 00:20:45.792 } 00:20:45.792 }, 00:20:45.792 { 00:20:45.792 "method": "bdev_nvme_set_hotplug", 00:20:45.792 "params": { 00:20:45.792 "period_us": 100000, 00:20:45.792 "enable": false 00:20:45.792 } 00:20:45.792 }, 00:20:45.792 { 00:20:45.792 "method": "bdev_wait_for_examine" 00:20:45.792 } 00:20:45.792 ] 00:20:45.792 }, 00:20:45.792 { 00:20:45.792 "subsystem": "nbd", 00:20:45.792 "config": [] 00:20:45.792 } 00:20:45.792 ] 00:20:45.792 }' 00:20:45.792 [2024-11-15 11:44:11.236540] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:20:45.792 [2024-11-15 11:44:11.236601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1095123 ]
00:20:46.052 [2024-11-15 11:44:11.321200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:46.052 [2024-11-15 11:44:11.350386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:20:46.052 [2024-11-15 11:44:11.485387] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:20:46.622 11:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:20:46.622 11:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0
00:20:46.622 11:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:20:46.622 Running I/O for 10 seconds...
00:20:48.947 5197.00 IOPS, 20.30 MiB/s
[2024-11-15T10:44:15.385Z] 5578.50 IOPS, 21.79 MiB/s
[2024-11-15T10:44:16.326Z] 5892.67 IOPS, 23.02 MiB/s
[2024-11-15T10:44:17.267Z] 5970.50 IOPS, 23.32 MiB/s
[2024-11-15T10:44:18.207Z] 6055.00 IOPS, 23.65 MiB/s
[2024-11-15T10:44:19.148Z] 6100.83 IOPS, 23.83 MiB/s
[2024-11-15T10:44:20.532Z] 6090.29 IOPS, 23.79 MiB/s
[2024-11-15T10:44:21.473Z] 6111.12 IOPS, 23.87 MiB/s
[2024-11-15T10:44:22.414Z] 6133.00 IOPS, 23.96 MiB/s
[2024-11-15T10:44:22.414Z] 6097.60 IOPS, 23.82 MiB/s
00:20:56.916 Latency(us)
00:20:56.916 [2024-11-15T10:44:22.414Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:56.916 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:56.916 Verification LBA range: start 0x0 length 0x2000
00:20:56.916 TLSTESTn1 : 10.01 6103.06 23.84 0.00 0.00 20944.04 4751.36 105294.51
00:20:56.916 [2024-11-15T10:44:22.414Z] ===================================================================================================================
00:20:56.916 [2024-11-15T10:44:22.414Z] Total : 6103.06 23.84 0.00 0.00 20944.04 4751.36 105294.51
00:20:56.916 {
00:20:56.916   "results": [
00:20:56.916     {
00:20:56.916       "job": "TLSTESTn1",
00:20:56.916       "core_mask": "0x4",
00:20:56.916       "workload": "verify",
00:20:56.916       "status": "finished",
00:20:56.916       "verify_range": {
00:20:56.916         "start": 0,
00:20:56.916         "length": 8192
00:20:56.916       },
00:20:56.916       "queue_depth": 128,
00:20:56.916       "io_size": 4096,
00:20:56.916       "runtime": 10.011543,
00:20:56.916       "iops": 6103.055243332621,
00:20:56.916       "mibps": 23.84005954426805,
00:20:56.916       "io_failed": 0,
00:20:56.916       "io_timeout": 0,
00:20:56.916       "avg_latency_us": 20944.03926176876,
00:20:56.916       "min_latency_us": 4751.36,
00:20:56.916       "max_latency_us": 105294.50666666667
00:20:56.916     }
00:20:56.916   ],
00:20:56.916   "core_count": 1
00:20:56.916 }
00:20:56.916 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:20:56.916 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1095123
00:20:56.916 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1095123 ']'
00:20:56.916 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1095123
00:20:56.916 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls --
common/autotest_common.sh@957 -- # uname 00:20:56.916 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:56.916 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1095123 00:20:56.916 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:56.916 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:56.916 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1095123' 00:20:56.916 killing process with pid 1095123 00:20:56.916 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1095123 00:20:56.916 Received shutdown signal, test time was about 10.000000 seconds 00:20:56.916 00:20:56.916 Latency(us) 00:20:56.916 [2024-11-15T10:44:22.414Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:56.916 [2024-11-15T10:44:22.414Z] =================================================================================================================== 00:20:56.916 [2024-11-15T10:44:22.414Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:56.916 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1095123 00:20:56.916 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1095015 00:20:56.916 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1095015 ']' 00:20:56.916 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1095015 00:20:56.916 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:56.916 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:56.916 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1095015 00:20:57.177 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:57.177 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:57.177 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1095015' 00:20:57.177 killing process with pid 1095015 00:20:57.177 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1095015 00:20:57.177 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1095015 00:20:57.177 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:57.177 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:57.177 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:57.177 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.177 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1097392 00:20:57.177 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1097392 00:20:57.177 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
00:20:57.177 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1097392 ']' 00:20:57.177 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.177 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:57.177 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:57.177 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:57.177 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.177 [2024-11-15 11:44:22.584634] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:20:57.177 [2024-11-15 11:44:22.584690] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:57.177 [2024-11-15 11:44:22.669175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.438 [2024-11-15 11:44:22.720509] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:57.438 [2024-11-15 11:44:22.720588] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:57.438 [2024-11-15 11:44:22.720600] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:57.438 [2024-11-15 11:44:22.720609] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:57.438 [2024-11-15 11:44:22.720617] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
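With both processes gone, the test brings up a fresh target and, as the trace below shows, reruns setup_nvmf_tgt against it with the same key file (tls.sh@221). Gathered in one place, that helper amounts to the following RPC sequence; a sketch assuming an SPDK checkout as the working directory, with the long Jenkins paths shortened:

    KEY=/tmp/tmp.KoZLvk7oI9                        # PSK file created earlier in the test
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k              # -k: listener requires TLS
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py keyring_file_add_key key0 "$KEY"
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk key0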
00:20:57.438 [2024-11-15 11:44:22.721591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.438 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:57.438 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:57.438 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:57.438 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:57.438 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:57.438 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:57.438 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.KoZLvk7oI9 00:20:57.438 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.KoZLvk7oI9 00:20:57.438 11:44:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:57.699 [2024-11-15 11:44:23.026437] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:57.699 11:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:57.960 11:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:57.960 [2024-11-15 11:44:23.419383] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:57.960 [2024-11-15 11:44:23.419717] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:57.960 11:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:58.221 malloc0 00:20:58.221 11:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:58.482 11:44:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.KoZLvk7oI9 00:20:58.743 11:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:59.004 11:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:59.004 11:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1097753 00:20:59.004 11:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:59.004 11:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1097753 /var/tmp/bdevperf.sock 00:20:59.004 11:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 1097753 ']' 00:20:59.004 11:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:59.004 11:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:59.004 11:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:59.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:59.004 11:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:59.004 11:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:59.004 [2024-11-15 11:44:24.287027] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:20:59.004 [2024-11-15 11:44:24.287099] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1097753 ] 00:20:59.004 [2024-11-15 11:44:24.374029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.004 [2024-11-15 11:44:24.407951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:59.004 11:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:59.004 11:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:59.004 11:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KoZLvk7oI9 00:20:59.264 11:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:59.523 [2024-11-15 11:44:24.825624] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:59.523 nvme0n1 00:20:59.523 11:44:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:59.523 Running I/O for 1 seconds... 
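The initiator half traced just above mirrors the target-side keyring setup: the same PSK file is registered under the same logical name inside bdevperf's own RPC server, and the controller is then attached by key name rather than by file path. A sketch of the two calls, with the socket path as in the trace:

    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KoZLvk7oI9
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    # success shows up as a new nvme0n1 bdev, which perform_tests then drives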
00:21:00.905 5592.00 IOPS, 21.84 MiB/s
00:21:00.905 Latency(us)
00:21:00.905 [2024-11-15T10:44:26.403Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:00.905 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:21:00.905 Verification LBA range: start 0x0 length 0x2000
00:21:00.905 nvme0n1 : 1.03 5577.58 21.79 0.00 0.00 22731.06 5925.55 40195.41
00:21:00.905 [2024-11-15T10:44:26.403Z] ===================================================================================================================
00:21:00.905 [2024-11-15T10:44:26.403Z] Total : 5577.58 21.79 0.00 0.00 22731.06 5925.55 40195.41
00:21:00.905 {
00:21:00.905   "results": [
00:21:00.905     {
00:21:00.905       "job": "nvme0n1",
00:21:00.905       "core_mask": "0x2",
00:21:00.905       "workload": "verify",
00:21:00.905       "status": "finished",
00:21:00.905       "verify_range": {
00:21:00.905         "start": 0,
00:21:00.905         "length": 8192
00:21:00.905       },
00:21:00.905       "queue_depth": 128,
00:21:00.905       "io_size": 4096,
00:21:00.905       "runtime": 1.025714,
00:21:00.905       "iops": 5577.57815531425,
00:21:00.905       "mibps": 21.787414669196288,
00:21:00.905       "io_failed": 0,
00:21:00.905       "io_timeout": 0,
00:21:00.905       "avg_latency_us": 22731.055447182895,
00:21:00.905       "min_latency_us": 5925.546666666667,
00:21:00.905       "max_latency_us": 40195.41333333333
00:21:00.905     }
00:21:00.905   ],
00:21:00.905   "core_count": 1
00:21:00.905 }
00:21:00.905 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1097753
00:21:00.905 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1097753 ']'
00:21:00.905 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1097753
00:21:00.905 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname
00:21:00.905 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:21:00.905 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1097753
00:21:00.906 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:21:00.906 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:21:00.906 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1097753'
00:21:00.906 killing process with pid 1097753
00:21:00.906 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1097753
00:21:00.906 Received shutdown signal, test time was about 1.000000 seconds
00:21:00.906
00:21:00.906 Latency(us)
00:21:00.906 [2024-11-15T10:44:26.404Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:00.906 [2024-11-15T10:44:26.404Z] ===================================================================================================================
00:21:00.906 [2024-11-15T10:44:26.404Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:00.906 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1097753
00:21:00.906 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1097392
00:21:00.906 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1097392 ']'
00:21:00.906 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1097392
00:21:00.906 11:44:26
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:21:00.906 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:00.906 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1097392 00:21:00.906 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:00.906 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:00.906 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1097392' 00:21:00.906 killing process with pid 1097392 00:21:00.906 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1097392 00:21:00.906 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1097392 00:21:01.167 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:21:01.167 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:01.167 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:01.167 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:01.167 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1098107 00:21:01.167 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1098107 00:21:01.167 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:01.167 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1098107 ']' 00:21:01.167 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:01.167 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:01.167 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:01.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:01.167 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:01.167 11:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:01.167 [2024-11-15 11:44:26.480814] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:21:01.167 [2024-11-15 11:44:26.480874] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:01.167 [2024-11-15 11:44:26.578203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.167 [2024-11-15 11:44:26.626150] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:01.167 [2024-11-15 11:44:26.626210] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:01.167 [2024-11-15 11:44:26.626219] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:01.167 [2024-11-15 11:44:26.626226] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:01.167 [2024-11-15 11:44:26.626232] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:01.167 [2024-11-15 11:44:26.627048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.109 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:02.109 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:21:02.109 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:02.109 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:02.109 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.109 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:02.109 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:21:02.109 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.109 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.109 [2024-11-15 11:44:27.355330] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:02.109 malloc0 00:21:02.109 [2024-11-15 11:44:27.385432] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:02.109 [2024-11-15 11:44:27.385788] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:02.109 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.109 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1098451 00:21:02.109 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1098451 /var/tmp/bdevperf.sock 00:21:02.109 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:02.109 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1098451 ']' 00:21:02.109 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:02.109 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:02.109 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:02.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:02.109 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:02.109 11:44:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.109 [2024-11-15 11:44:27.469532] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
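For orientation, the target/initiator pair being restarted here boils down to two commands. This is a sketch with repo-relative paths, reusing the namespace, core masks, flags and socket path shown in the trace; the rest of the harness plumbing is omitted:

# target side, inside the test's network namespace
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF &

# initiator side: -z keeps bdevperf idle until it is driven over the
# RPC socket; -w verify -t 1 matches the one-second runs in this log
build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4k -w verify -t 1 &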
00:21:02.109 [2024-11-15 11:44:27.469605] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1098451 ] 00:21:02.109 [2024-11-15 11:44:27.557060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.109 [2024-11-15 11:44:27.591441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:03.048 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:03.048 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:21:03.048 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.KoZLvk7oI9 00:21:03.048 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:03.309 [2024-11-15 11:44:28.586322] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:03.309 nvme0n1 00:21:03.309 11:44:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:03.309 Running I/O for 1 seconds... 00:21:04.691 5500.00 IOPS, 21.48 MiB/s 00:21:04.691 Latency(us) 00:21:04.691 [2024-11-15T10:44:30.190Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.692 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:04.692 Verification LBA range: start 0x0 length 0x2000 00:21:04.692 nvme0n1 : 1.02 5545.14 21.66 0.00 0.00 22935.43 5379.41 69468.16 00:21:04.692 [2024-11-15T10:44:30.190Z] =================================================================================================================== 00:21:04.692 [2024-11-15T10:44:30.190Z] Total : 5545.14 21.66 0.00 0.00 22935.43 5379.41 69468.16 00:21:04.692 { 00:21:04.692 "results": [ 00:21:04.692 { 00:21:04.692 "job": "nvme0n1", 00:21:04.692 "core_mask": "0x2", 00:21:04.692 "workload": "verify", 00:21:04.692 "status": "finished", 00:21:04.692 "verify_range": { 00:21:04.692 "start": 0, 00:21:04.692 "length": 8192 00:21:04.692 }, 00:21:04.692 "queue_depth": 128, 00:21:04.692 "io_size": 4096, 00:21:04.692 "runtime": 1.015123, 00:21:04.692 "iops": 5545.140835150026, 00:21:04.692 "mibps": 21.66070638730479, 00:21:04.692 "io_failed": 0, 00:21:04.692 "io_timeout": 0, 00:21:04.692 "avg_latency_us": 22935.431572215315, 00:21:04.692 "min_latency_us": 5379.413333333333, 00:21:04.692 "max_latency_us": 69468.16 00:21:04.692 } 00:21:04.692 ], 00:21:04.692 "core_count": 1 00:21:04.692 } 00:21:04.692 11:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:21:04.692 11:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.692 11:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.692 11:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.692 11:44:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:21:04.692 "subsystems": [ 00:21:04.692 { 00:21:04.692 "subsystem": "keyring", 00:21:04.692 "config": [ 00:21:04.692 { 00:21:04.692 "method": "keyring_file_add_key", 00:21:04.692 "params": { 00:21:04.692 "name": "key0", 00:21:04.692 "path": "/tmp/tmp.KoZLvk7oI9" 00:21:04.692 } 00:21:04.692 } 00:21:04.692 ] 00:21:04.692 }, 00:21:04.692 { 00:21:04.692 "subsystem": "iobuf", 00:21:04.692 "config": [ 00:21:04.692 { 00:21:04.692 "method": "iobuf_set_options", 00:21:04.692 "params": { 00:21:04.692 "small_pool_count": 8192, 00:21:04.692 "large_pool_count": 1024, 00:21:04.692 "small_bufsize": 8192, 00:21:04.692 "large_bufsize": 135168, 00:21:04.692 "enable_numa": false 00:21:04.692 } 00:21:04.692 } 00:21:04.692 ] 00:21:04.692 }, 00:21:04.692 { 00:21:04.692 "subsystem": "sock", 00:21:04.692 "config": [ 00:21:04.692 { 00:21:04.692 "method": "sock_set_default_impl", 00:21:04.692 "params": { 00:21:04.692 "impl_name": "posix" 00:21:04.692 } 00:21:04.692 }, 00:21:04.692 { 00:21:04.692 "method": "sock_impl_set_options", 00:21:04.692 "params": { 00:21:04.692 "impl_name": "ssl", 00:21:04.692 "recv_buf_size": 4096, 00:21:04.692 "send_buf_size": 4096, 00:21:04.692 "enable_recv_pipe": true, 00:21:04.692 "enable_quickack": false, 00:21:04.692 "enable_placement_id": 0, 00:21:04.692 "enable_zerocopy_send_server": true, 00:21:04.692 "enable_zerocopy_send_client": false, 00:21:04.692 "zerocopy_threshold": 0, 00:21:04.692 "tls_version": 0, 00:21:04.692 "enable_ktls": false 00:21:04.692 } 00:21:04.692 }, 00:21:04.692 { 00:21:04.692 "method": "sock_impl_set_options", 00:21:04.692 "params": { 00:21:04.692 "impl_name": "posix", 00:21:04.692 "recv_buf_size": 2097152, 00:21:04.692 "send_buf_size": 2097152, 00:21:04.692 "enable_recv_pipe": true, 00:21:04.692 "enable_quickack": false, 00:21:04.692 "enable_placement_id": 0, 00:21:04.692 "enable_zerocopy_send_server": true, 00:21:04.692 "enable_zerocopy_send_client": false, 00:21:04.692 "zerocopy_threshold": 0, 00:21:04.692 "tls_version": 0, 00:21:04.692 "enable_ktls": false 00:21:04.692 } 00:21:04.692 } 00:21:04.692 ] 00:21:04.692 }, 00:21:04.692 { 00:21:04.692 "subsystem": "vmd", 00:21:04.692 "config": [] 00:21:04.692 }, 00:21:04.692 { 00:21:04.692 "subsystem": "accel", 00:21:04.692 "config": [ 00:21:04.692 { 00:21:04.692 "method": "accel_set_options", 00:21:04.692 "params": { 00:21:04.692 "small_cache_size": 128, 00:21:04.692 "large_cache_size": 16, 00:21:04.692 "task_count": 2048, 00:21:04.692 "sequence_count": 2048, 00:21:04.692 "buf_count": 2048 00:21:04.692 } 00:21:04.692 } 00:21:04.692 ] 00:21:04.692 }, 00:21:04.692 { 00:21:04.692 "subsystem": "bdev", 00:21:04.692 "config": [ 00:21:04.692 { 00:21:04.692 "method": "bdev_set_options", 00:21:04.692 "params": { 00:21:04.692 "bdev_io_pool_size": 65535, 00:21:04.692 "bdev_io_cache_size": 256, 00:21:04.692 "bdev_auto_examine": true, 00:21:04.692 "iobuf_small_cache_size": 128, 00:21:04.692 "iobuf_large_cache_size": 16 00:21:04.692 } 00:21:04.692 }, 00:21:04.692 { 00:21:04.692 "method": "bdev_raid_set_options", 00:21:04.692 "params": { 00:21:04.692 "process_window_size_kb": 1024, 00:21:04.692 "process_max_bandwidth_mb_sec": 0 00:21:04.692 } 00:21:04.692 }, 00:21:04.692 { 00:21:04.692 "method": "bdev_iscsi_set_options", 00:21:04.692 "params": { 00:21:04.692 "timeout_sec": 30 00:21:04.692 } 00:21:04.692 }, 00:21:04.692 { 00:21:04.692 "method": "bdev_nvme_set_options", 00:21:04.692 "params": { 00:21:04.692 "action_on_timeout": "none", 00:21:04.692 
"timeout_us": 0, 00:21:04.692 "timeout_admin_us": 0, 00:21:04.692 "keep_alive_timeout_ms": 10000, 00:21:04.692 "arbitration_burst": 0, 00:21:04.692 "low_priority_weight": 0, 00:21:04.692 "medium_priority_weight": 0, 00:21:04.692 "high_priority_weight": 0, 00:21:04.692 "nvme_adminq_poll_period_us": 10000, 00:21:04.692 "nvme_ioq_poll_period_us": 0, 00:21:04.692 "io_queue_requests": 0, 00:21:04.692 "delay_cmd_submit": true, 00:21:04.692 "transport_retry_count": 4, 00:21:04.692 "bdev_retry_count": 3, 00:21:04.692 "transport_ack_timeout": 0, 00:21:04.692 "ctrlr_loss_timeout_sec": 0, 00:21:04.692 "reconnect_delay_sec": 0, 00:21:04.692 "fast_io_fail_timeout_sec": 0, 00:21:04.692 "disable_auto_failback": false, 00:21:04.692 "generate_uuids": false, 00:21:04.692 "transport_tos": 0, 00:21:04.692 "nvme_error_stat": false, 00:21:04.692 "rdma_srq_size": 0, 00:21:04.692 "io_path_stat": false, 00:21:04.692 "allow_accel_sequence": false, 00:21:04.692 "rdma_max_cq_size": 0, 00:21:04.692 "rdma_cm_event_timeout_ms": 0, 00:21:04.692 "dhchap_digests": [ 00:21:04.692 "sha256", 00:21:04.692 "sha384", 00:21:04.692 "sha512" 00:21:04.692 ], 00:21:04.692 "dhchap_dhgroups": [ 00:21:04.692 "null", 00:21:04.692 "ffdhe2048", 00:21:04.692 "ffdhe3072", 00:21:04.692 "ffdhe4096", 00:21:04.692 "ffdhe6144", 00:21:04.692 "ffdhe8192" 00:21:04.692 ] 00:21:04.692 } 00:21:04.692 }, 00:21:04.692 { 00:21:04.692 "method": "bdev_nvme_set_hotplug", 00:21:04.692 "params": { 00:21:04.692 "period_us": 100000, 00:21:04.692 "enable": false 00:21:04.692 } 00:21:04.692 }, 00:21:04.692 { 00:21:04.692 "method": "bdev_malloc_create", 00:21:04.692 "params": { 00:21:04.692 "name": "malloc0", 00:21:04.692 "num_blocks": 8192, 00:21:04.692 "block_size": 4096, 00:21:04.692 "physical_block_size": 4096, 00:21:04.692 "uuid": "82500687-3809-4426-8afe-83bedfacf823", 00:21:04.692 "optimal_io_boundary": 0, 00:21:04.692 "md_size": 0, 00:21:04.692 "dif_type": 0, 00:21:04.692 "dif_is_head_of_md": false, 00:21:04.692 "dif_pi_format": 0 00:21:04.693 } 00:21:04.693 }, 00:21:04.693 { 00:21:04.693 "method": "bdev_wait_for_examine" 00:21:04.693 } 00:21:04.693 ] 00:21:04.693 }, 00:21:04.693 { 00:21:04.693 "subsystem": "nbd", 00:21:04.693 "config": [] 00:21:04.693 }, 00:21:04.693 { 00:21:04.693 "subsystem": "scheduler", 00:21:04.693 "config": [ 00:21:04.693 { 00:21:04.693 "method": "framework_set_scheduler", 00:21:04.693 "params": { 00:21:04.693 "name": "static" 00:21:04.693 } 00:21:04.693 } 00:21:04.693 ] 00:21:04.693 }, 00:21:04.693 { 00:21:04.693 "subsystem": "nvmf", 00:21:04.693 "config": [ 00:21:04.693 { 00:21:04.693 "method": "nvmf_set_config", 00:21:04.693 "params": { 00:21:04.693 "discovery_filter": "match_any", 00:21:04.693 "admin_cmd_passthru": { 00:21:04.693 "identify_ctrlr": false 00:21:04.693 }, 00:21:04.693 "dhchap_digests": [ 00:21:04.693 "sha256", 00:21:04.693 "sha384", 00:21:04.693 "sha512" 00:21:04.693 ], 00:21:04.693 "dhchap_dhgroups": [ 00:21:04.693 "null", 00:21:04.693 "ffdhe2048", 00:21:04.693 "ffdhe3072", 00:21:04.693 "ffdhe4096", 00:21:04.693 "ffdhe6144", 00:21:04.693 "ffdhe8192" 00:21:04.693 ] 00:21:04.693 } 00:21:04.693 }, 00:21:04.693 { 00:21:04.693 "method": "nvmf_set_max_subsystems", 00:21:04.693 "params": { 00:21:04.693 "max_subsystems": 1024 00:21:04.693 } 00:21:04.693 }, 00:21:04.693 { 00:21:04.693 "method": "nvmf_set_crdt", 00:21:04.693 "params": { 00:21:04.693 "crdt1": 0, 00:21:04.693 "crdt2": 0, 00:21:04.693 "crdt3": 0 00:21:04.693 } 00:21:04.693 }, 00:21:04.693 { 00:21:04.693 "method": "nvmf_create_transport", 00:21:04.693 "params": 
{ 00:21:04.693 "trtype": "TCP", 00:21:04.693 "max_queue_depth": 128, 00:21:04.693 "max_io_qpairs_per_ctrlr": 127, 00:21:04.693 "in_capsule_data_size": 4096, 00:21:04.693 "max_io_size": 131072, 00:21:04.693 "io_unit_size": 131072, 00:21:04.693 "max_aq_depth": 128, 00:21:04.693 "num_shared_buffers": 511, 00:21:04.693 "buf_cache_size": 4294967295, 00:21:04.693 "dif_insert_or_strip": false, 00:21:04.693 "zcopy": false, 00:21:04.693 "c2h_success": false, 00:21:04.693 "sock_priority": 0, 00:21:04.693 "abort_timeout_sec": 1, 00:21:04.693 "ack_timeout": 0, 00:21:04.693 "data_wr_pool_size": 0 00:21:04.693 } 00:21:04.693 }, 00:21:04.693 { 00:21:04.693 "method": "nvmf_create_subsystem", 00:21:04.693 "params": { 00:21:04.693 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.693 "allow_any_host": false, 00:21:04.693 "serial_number": "00000000000000000000", 00:21:04.693 "model_number": "SPDK bdev Controller", 00:21:04.693 "max_namespaces": 32, 00:21:04.693 "min_cntlid": 1, 00:21:04.693 "max_cntlid": 65519, 00:21:04.693 "ana_reporting": false 00:21:04.693 } 00:21:04.693 }, 00:21:04.693 { 00:21:04.693 "method": "nvmf_subsystem_add_host", 00:21:04.693 "params": { 00:21:04.693 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.693 "host": "nqn.2016-06.io.spdk:host1", 00:21:04.693 "psk": "key0" 00:21:04.693 } 00:21:04.693 }, 00:21:04.693 { 00:21:04.693 "method": "nvmf_subsystem_add_ns", 00:21:04.693 "params": { 00:21:04.693 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.693 "namespace": { 00:21:04.693 "nsid": 1, 00:21:04.693 "bdev_name": "malloc0", 00:21:04.693 "nguid": "82500687380944268AFE83BEDFACF823", 00:21:04.693 "uuid": "82500687-3809-4426-8afe-83bedfacf823", 00:21:04.693 "no_auto_visible": false 00:21:04.693 } 00:21:04.693 } 00:21:04.693 }, 00:21:04.693 { 00:21:04.693 "method": "nvmf_subsystem_add_listener", 00:21:04.693 "params": { 00:21:04.693 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.693 "listen_address": { 00:21:04.693 "trtype": "TCP", 00:21:04.693 "adrfam": "IPv4", 00:21:04.693 "traddr": "10.0.0.2", 00:21:04.693 "trsvcid": "4420" 00:21:04.693 }, 00:21:04.693 "secure_channel": false, 00:21:04.693 "sock_impl": "ssl" 00:21:04.693 } 00:21:04.693 } 00:21:04.693 ] 00:21:04.693 } 00:21:04.693 ] 00:21:04.693 }' 00:21:04.693 11:44:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:04.693 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:21:04.693 "subsystems": [ 00:21:04.693 { 00:21:04.693 "subsystem": "keyring", 00:21:04.693 "config": [ 00:21:04.693 { 00:21:04.693 "method": "keyring_file_add_key", 00:21:04.693 "params": { 00:21:04.693 "name": "key0", 00:21:04.693 "path": "/tmp/tmp.KoZLvk7oI9" 00:21:04.693 } 00:21:04.693 } 00:21:04.693 ] 00:21:04.693 }, 00:21:04.693 { 00:21:04.693 "subsystem": "iobuf", 00:21:04.693 "config": [ 00:21:04.693 { 00:21:04.693 "method": "iobuf_set_options", 00:21:04.693 "params": { 00:21:04.693 "small_pool_count": 8192, 00:21:04.693 "large_pool_count": 1024, 00:21:04.693 "small_bufsize": 8192, 00:21:04.693 "large_bufsize": 135168, 00:21:04.693 "enable_numa": false 00:21:04.693 } 00:21:04.693 } 00:21:04.693 ] 00:21:04.693 }, 00:21:04.693 { 00:21:04.693 "subsystem": "sock", 00:21:04.693 "config": [ 00:21:04.693 { 00:21:04.693 "method": "sock_set_default_impl", 00:21:04.693 "params": { 00:21:04.693 "impl_name": "posix" 00:21:04.693 } 00:21:04.693 }, 00:21:04.693 { 00:21:04.693 "method": "sock_impl_set_options", 00:21:04.693 
"params": { 00:21:04.693 "impl_name": "ssl", 00:21:04.693 "recv_buf_size": 4096, 00:21:04.693 "send_buf_size": 4096, 00:21:04.693 "enable_recv_pipe": true, 00:21:04.693 "enable_quickack": false, 00:21:04.693 "enable_placement_id": 0, 00:21:04.693 "enable_zerocopy_send_server": true, 00:21:04.693 "enable_zerocopy_send_client": false, 00:21:04.693 "zerocopy_threshold": 0, 00:21:04.693 "tls_version": 0, 00:21:04.693 "enable_ktls": false 00:21:04.693 } 00:21:04.693 }, 00:21:04.693 { 00:21:04.693 "method": "sock_impl_set_options", 00:21:04.693 "params": { 00:21:04.693 "impl_name": "posix", 00:21:04.693 "recv_buf_size": 2097152, 00:21:04.693 "send_buf_size": 2097152, 00:21:04.693 "enable_recv_pipe": true, 00:21:04.693 "enable_quickack": false, 00:21:04.693 "enable_placement_id": 0, 00:21:04.693 "enable_zerocopy_send_server": true, 00:21:04.693 "enable_zerocopy_send_client": false, 00:21:04.693 "zerocopy_threshold": 0, 00:21:04.693 "tls_version": 0, 00:21:04.693 "enable_ktls": false 00:21:04.694 } 00:21:04.694 } 00:21:04.694 ] 00:21:04.694 }, 00:21:04.694 { 00:21:04.694 "subsystem": "vmd", 00:21:04.694 "config": [] 00:21:04.694 }, 00:21:04.694 { 00:21:04.694 "subsystem": "accel", 00:21:04.694 "config": [ 00:21:04.694 { 00:21:04.694 "method": "accel_set_options", 00:21:04.694 "params": { 00:21:04.694 "small_cache_size": 128, 00:21:04.694 "large_cache_size": 16, 00:21:04.694 "task_count": 2048, 00:21:04.694 "sequence_count": 2048, 00:21:04.694 "buf_count": 2048 00:21:04.694 } 00:21:04.694 } 00:21:04.694 ] 00:21:04.694 }, 00:21:04.694 { 00:21:04.694 "subsystem": "bdev", 00:21:04.694 "config": [ 00:21:04.694 { 00:21:04.694 "method": "bdev_set_options", 00:21:04.694 "params": { 00:21:04.694 "bdev_io_pool_size": 65535, 00:21:04.694 "bdev_io_cache_size": 256, 00:21:04.694 "bdev_auto_examine": true, 00:21:04.694 "iobuf_small_cache_size": 128, 00:21:04.694 "iobuf_large_cache_size": 16 00:21:04.694 } 00:21:04.694 }, 00:21:04.694 { 00:21:04.694 "method": "bdev_raid_set_options", 00:21:04.694 "params": { 00:21:04.694 "process_window_size_kb": 1024, 00:21:04.694 "process_max_bandwidth_mb_sec": 0 00:21:04.694 } 00:21:04.694 }, 00:21:04.694 { 00:21:04.694 "method": "bdev_iscsi_set_options", 00:21:04.694 "params": { 00:21:04.694 "timeout_sec": 30 00:21:04.694 } 00:21:04.694 }, 00:21:04.694 { 00:21:04.694 "method": "bdev_nvme_set_options", 00:21:04.694 "params": { 00:21:04.694 "action_on_timeout": "none", 00:21:04.694 "timeout_us": 0, 00:21:04.694 "timeout_admin_us": 0, 00:21:04.694 "keep_alive_timeout_ms": 10000, 00:21:04.694 "arbitration_burst": 0, 00:21:04.694 "low_priority_weight": 0, 00:21:04.694 "medium_priority_weight": 0, 00:21:04.694 "high_priority_weight": 0, 00:21:04.694 "nvme_adminq_poll_period_us": 10000, 00:21:04.694 "nvme_ioq_poll_period_us": 0, 00:21:04.694 "io_queue_requests": 512, 00:21:04.694 "delay_cmd_submit": true, 00:21:04.694 "transport_retry_count": 4, 00:21:04.694 "bdev_retry_count": 3, 00:21:04.694 "transport_ack_timeout": 0, 00:21:04.694 "ctrlr_loss_timeout_sec": 0, 00:21:04.694 "reconnect_delay_sec": 0, 00:21:04.694 "fast_io_fail_timeout_sec": 0, 00:21:04.694 "disable_auto_failback": false, 00:21:04.694 "generate_uuids": false, 00:21:04.694 "transport_tos": 0, 00:21:04.694 "nvme_error_stat": false, 00:21:04.694 "rdma_srq_size": 0, 00:21:04.694 "io_path_stat": false, 00:21:04.694 "allow_accel_sequence": false, 00:21:04.694 "rdma_max_cq_size": 0, 00:21:04.694 "rdma_cm_event_timeout_ms": 0, 00:21:04.694 "dhchap_digests": [ 00:21:04.694 "sha256", 00:21:04.694 "sha384", 00:21:04.694 
"sha512" 00:21:04.694 ], 00:21:04.694 "dhchap_dhgroups": [ 00:21:04.694 "null", 00:21:04.694 "ffdhe2048", 00:21:04.694 "ffdhe3072", 00:21:04.694 "ffdhe4096", 00:21:04.694 "ffdhe6144", 00:21:04.694 "ffdhe8192" 00:21:04.694 ] 00:21:04.694 } 00:21:04.694 }, 00:21:04.694 { 00:21:04.694 "method": "bdev_nvme_attach_controller", 00:21:04.694 "params": { 00:21:04.694 "name": "nvme0", 00:21:04.694 "trtype": "TCP", 00:21:04.694 "adrfam": "IPv4", 00:21:04.694 "traddr": "10.0.0.2", 00:21:04.694 "trsvcid": "4420", 00:21:04.694 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:04.694 "prchk_reftag": false, 00:21:04.694 "prchk_guard": false, 00:21:04.694 "ctrlr_loss_timeout_sec": 0, 00:21:04.694 "reconnect_delay_sec": 0, 00:21:04.694 "fast_io_fail_timeout_sec": 0, 00:21:04.694 "psk": "key0", 00:21:04.694 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:04.694 "hdgst": false, 00:21:04.694 "ddgst": false, 00:21:04.694 "multipath": "multipath" 00:21:04.694 } 00:21:04.694 }, 00:21:04.694 { 00:21:04.694 "method": "bdev_nvme_set_hotplug", 00:21:04.694 "params": { 00:21:04.694 "period_us": 100000, 00:21:04.694 "enable": false 00:21:04.694 } 00:21:04.694 }, 00:21:04.694 { 00:21:04.694 "method": "bdev_enable_histogram", 00:21:04.694 "params": { 00:21:04.694 "name": "nvme0n1", 00:21:04.694 "enable": true 00:21:04.694 } 00:21:04.694 }, 00:21:04.694 { 00:21:04.694 "method": "bdev_wait_for_examine" 00:21:04.694 } 00:21:04.694 ] 00:21:04.694 }, 00:21:04.694 { 00:21:04.694 "subsystem": "nbd", 00:21:04.694 "config": [] 00:21:04.694 } 00:21:04.694 ] 00:21:04.694 }' 00:21:04.694 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1098451 00:21:04.694 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1098451 ']' 00:21:04.694 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1098451 00:21:04.694 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:21:04.694 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:04.955 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1098451 00:21:04.955 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:04.955 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:04.955 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1098451' 00:21:04.955 killing process with pid 1098451 00:21:04.955 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1098451 00:21:04.955 Received shutdown signal, test time was about 1.000000 seconds 00:21:04.955 00:21:04.955 Latency(us) 00:21:04.955 [2024-11-15T10:44:30.453Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:04.955 [2024-11-15T10:44:30.453Z] =================================================================================================================== 00:21:04.955 [2024-11-15T10:44:30.453Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:04.955 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1098451 00:21:04.955 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1098107 00:21:04.955 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1098107 
']' 00:21:04.955 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1098107 00:21:04.955 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:21:04.955 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:04.955 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1098107 00:21:04.955 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:04.955 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:04.955 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1098107' 00:21:04.955 killing process with pid 1098107 00:21:04.955 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1098107 00:21:04.955 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1098107 00:21:05.217 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:21:05.217 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:05.217 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:05.217 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.217 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:21:05.217 "subsystems": [ 00:21:05.217 { 00:21:05.217 "subsystem": "keyring", 00:21:05.217 "config": [ 00:21:05.217 { 00:21:05.217 "method": "keyring_file_add_key", 00:21:05.217 "params": { 00:21:05.217 "name": "key0", 00:21:05.217 "path": "/tmp/tmp.KoZLvk7oI9" 00:21:05.217 } 00:21:05.217 } 00:21:05.217 ] 00:21:05.217 }, 00:21:05.217 { 00:21:05.217 "subsystem": "iobuf", 00:21:05.217 "config": [ 00:21:05.217 { 00:21:05.217 "method": "iobuf_set_options", 00:21:05.217 "params": { 00:21:05.217 "small_pool_count": 8192, 00:21:05.217 "large_pool_count": 1024, 00:21:05.217 "small_bufsize": 8192, 00:21:05.217 "large_bufsize": 135168, 00:21:05.217 "enable_numa": false 00:21:05.217 } 00:21:05.217 } 00:21:05.217 ] 00:21:05.217 }, 00:21:05.217 { 00:21:05.217 "subsystem": "sock", 00:21:05.217 "config": [ 00:21:05.217 { 00:21:05.217 "method": "sock_set_default_impl", 00:21:05.217 "params": { 00:21:05.217 "impl_name": "posix" 00:21:05.217 } 00:21:05.217 }, 00:21:05.217 { 00:21:05.217 "method": "sock_impl_set_options", 00:21:05.217 "params": { 00:21:05.217 "impl_name": "ssl", 00:21:05.217 "recv_buf_size": 4096, 00:21:05.217 "send_buf_size": 4096, 00:21:05.217 "enable_recv_pipe": true, 00:21:05.217 "enable_quickack": false, 00:21:05.217 "enable_placement_id": 0, 00:21:05.217 "enable_zerocopy_send_server": true, 00:21:05.218 "enable_zerocopy_send_client": false, 00:21:05.218 "zerocopy_threshold": 0, 00:21:05.218 "tls_version": 0, 00:21:05.218 "enable_ktls": false 00:21:05.218 } 00:21:05.218 }, 00:21:05.218 { 00:21:05.218 "method": "sock_impl_set_options", 00:21:05.218 "params": { 00:21:05.218 "impl_name": "posix", 00:21:05.218 "recv_buf_size": 2097152, 00:21:05.218 "send_buf_size": 2097152, 00:21:05.218 "enable_recv_pipe": true, 00:21:05.218 "enable_quickack": false, 00:21:05.218 "enable_placement_id": 0, 00:21:05.218 "enable_zerocopy_send_server": true, 00:21:05.218 "enable_zerocopy_send_client": 
false, 00:21:05.218 "zerocopy_threshold": 0, 00:21:05.218 "tls_version": 0, 00:21:05.218 "enable_ktls": false 00:21:05.218 } 00:21:05.218 } 00:21:05.218 ] 00:21:05.218 }, 00:21:05.218 { 00:21:05.218 "subsystem": "vmd", 00:21:05.218 "config": [] 00:21:05.218 }, 00:21:05.218 { 00:21:05.218 "subsystem": "accel", 00:21:05.218 "config": [ 00:21:05.218 { 00:21:05.218 "method": "accel_set_options", 00:21:05.218 "params": { 00:21:05.218 "small_cache_size": 128, 00:21:05.218 "large_cache_size": 16, 00:21:05.218 "task_count": 2048, 00:21:05.218 "sequence_count": 2048, 00:21:05.218 "buf_count": 2048 00:21:05.218 } 00:21:05.218 } 00:21:05.218 ] 00:21:05.218 }, 00:21:05.218 { 00:21:05.218 "subsystem": "bdev", 00:21:05.218 "config": [ 00:21:05.218 { 00:21:05.218 "method": "bdev_set_options", 00:21:05.218 "params": { 00:21:05.218 "bdev_io_pool_size": 65535, 00:21:05.218 "bdev_io_cache_size": 256, 00:21:05.218 "bdev_auto_examine": true, 00:21:05.218 "iobuf_small_cache_size": 128, 00:21:05.218 "iobuf_large_cache_size": 16 00:21:05.218 } 00:21:05.218 }, 00:21:05.218 { 00:21:05.218 "method": "bdev_raid_set_options", 00:21:05.218 "params": { 00:21:05.218 "process_window_size_kb": 1024, 00:21:05.218 "process_max_bandwidth_mb_sec": 0 00:21:05.218 } 00:21:05.218 }, 00:21:05.218 { 00:21:05.218 "method": "bdev_iscsi_set_options", 00:21:05.218 "params": { 00:21:05.218 "timeout_sec": 30 00:21:05.218 } 00:21:05.218 }, 00:21:05.218 { 00:21:05.218 "method": "bdev_nvme_set_options", 00:21:05.218 "params": { 00:21:05.218 "action_on_timeout": "none", 00:21:05.218 "timeout_us": 0, 00:21:05.218 "timeout_admin_us": 0, 00:21:05.218 "keep_alive_timeout_ms": 10000, 00:21:05.218 "arbitration_burst": 0, 00:21:05.218 "low_priority_weight": 0, 00:21:05.218 "medium_priority_weight": 0, 00:21:05.218 "high_priority_weight": 0, 00:21:05.218 "nvme_adminq_poll_period_us": 10000, 00:21:05.218 "nvme_ioq_poll_period_us": 0, 00:21:05.218 "io_queue_requests": 0, 00:21:05.218 "delay_cmd_submit": true, 00:21:05.218 "transport_retry_count": 4, 00:21:05.218 "bdev_retry_count": 3, 00:21:05.218 "transport_ack_timeout": 0, 00:21:05.218 "ctrlr_loss_timeout_sec": 0, 00:21:05.218 "reconnect_delay_sec": 0, 00:21:05.218 "fast_io_fail_timeout_sec": 0, 00:21:05.218 "disable_auto_failback": false, 00:21:05.218 "generate_uuids": false, 00:21:05.218 "transport_tos": 0, 00:21:05.218 "nvme_error_stat": false, 00:21:05.218 "rdma_srq_size": 0, 00:21:05.218 "io_path_stat": false, 00:21:05.218 "allow_accel_sequence": false, 00:21:05.218 "rdma_max_cq_size": 0, 00:21:05.218 "rdma_cm_event_timeout_ms": 0, 00:21:05.218 "dhchap_digests": [ 00:21:05.218 "sha256", 00:21:05.218 "sha384", 00:21:05.218 "sha512" 00:21:05.218 ], 00:21:05.218 "dhchap_dhgroups": [ 00:21:05.218 "null", 00:21:05.218 "ffdhe2048", 00:21:05.218 "ffdhe3072", 00:21:05.218 "ffdhe4096", 00:21:05.218 "ffdhe6144", 00:21:05.218 "ffdhe8192" 00:21:05.218 ] 00:21:05.218 } 00:21:05.218 }, 00:21:05.218 { 00:21:05.218 "method": "bdev_nvme_set_hotplug", 00:21:05.218 "params": { 00:21:05.218 "period_us": 100000, 00:21:05.218 "enable": false 00:21:05.218 } 00:21:05.218 }, 00:21:05.218 { 00:21:05.218 "method": "bdev_malloc_create", 00:21:05.218 "params": { 00:21:05.218 "name": "malloc0", 00:21:05.218 "num_blocks": 8192, 00:21:05.218 "block_size": 4096, 00:21:05.218 "physical_block_size": 4096, 00:21:05.218 "uuid": "82500687-3809-4426-8afe-83bedfacf823", 00:21:05.218 "optimal_io_boundary": 0, 00:21:05.218 "md_size": 0, 00:21:05.218 "dif_type": 0, 00:21:05.218 "dif_is_head_of_md": false, 00:21:05.218 "dif_pi_format": 0 
00:21:05.218 } 00:21:05.218 }, 00:21:05.218 { 00:21:05.218 "method": "bdev_wait_for_examine" 00:21:05.218 } 00:21:05.218 ] 00:21:05.218 }, 00:21:05.218 { 00:21:05.218 "subsystem": "nbd", 00:21:05.218 "config": [] 00:21:05.218 }, 00:21:05.218 { 00:21:05.218 "subsystem": "scheduler", 00:21:05.218 "config": [ 00:21:05.218 { 00:21:05.218 "method": "framework_set_scheduler", 00:21:05.218 "params": { 00:21:05.218 "name": "static" 00:21:05.218 } 00:21:05.218 } 00:21:05.218 ] 00:21:05.218 }, 00:21:05.218 { 00:21:05.218 "subsystem": "nvmf", 00:21:05.218 "config": [ 00:21:05.218 { 00:21:05.218 "method": "nvmf_set_config", 00:21:05.218 "params": { 00:21:05.218 "discovery_filter": "match_any", 00:21:05.218 "admin_cmd_passthru": { 00:21:05.218 "identify_ctrlr": false 00:21:05.218 }, 00:21:05.218 "dhchap_digests": [ 00:21:05.218 "sha256", 00:21:05.218 "sha384", 00:21:05.218 "sha512" 00:21:05.218 ], 00:21:05.218 "dhchap_dhgroups": [ 00:21:05.218 "null", 00:21:05.218 "ffdhe2048", 00:21:05.218 "ffdhe3072", 00:21:05.218 "ffdhe4096", 00:21:05.218 "ffdhe6144", 00:21:05.218 "ffdhe8192" 00:21:05.218 ] 00:21:05.218 } 00:21:05.218 }, 00:21:05.218 { 00:21:05.218 "method": "nvmf_set_max_subsystems", 00:21:05.218 "params": { 00:21:05.218 "max_subsystems": 1024 00:21:05.218 } 00:21:05.218 }, 00:21:05.218 { 00:21:05.218 "method": "nvmf_set_crdt", 00:21:05.218 "params": { 00:21:05.218 "crdt1": 0, 00:21:05.218 "crdt2": 0, 00:21:05.218 "crdt3": 0 00:21:05.218 } 00:21:05.218 }, 00:21:05.218 { 00:21:05.218 "method": "nvmf_create_transport", 00:21:05.218 "params": { 00:21:05.218 "trtype": "TCP", 00:21:05.218 "max_queue_depth": 128, 00:21:05.218 "max_io_qpairs_per_ctrlr": 127, 00:21:05.218 "in_capsule_data_size": 4096, 00:21:05.218 "max_io_size": 131072, 00:21:05.218 "io_unit_size": 131072, 00:21:05.218 "max_aq_depth": 128, 00:21:05.218 "num_shared_buffers": 511, 00:21:05.218 "buf_cache_size": 4294967295, 00:21:05.218 "dif_insert_or_strip": false, 00:21:05.218 "zcopy": false, 00:21:05.218 "c2h_success": false, 00:21:05.218 "sock_priority": 0, 00:21:05.218 "abort_timeout_sec": 1, 00:21:05.218 "ack_timeout": 0, 00:21:05.218 "data_wr_pool_size": 0 00:21:05.218 } 00:21:05.218 }, 00:21:05.218 { 00:21:05.218 "method": "nvmf_create_subsystem", 00:21:05.218 "params": { 00:21:05.218 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.218 "allow_any_host": false, 00:21:05.218 "serial_number": "00000000000000000000", 00:21:05.218 "model_number": "SPDK bdev Controller", 00:21:05.218 "max_namespaces": 32, 00:21:05.218 "min_cntlid": 1, 00:21:05.218 "max_cntlid": 65519, 00:21:05.218 "ana_reporting": false 00:21:05.218 } 00:21:05.218 }, 00:21:05.218 { 00:21:05.218 "method": "nvmf_subsystem_add_host", 00:21:05.218 "params": { 00:21:05.218 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.218 "host": "nqn.2016-06.io.spdk:host1", 00:21:05.218 "psk": "key0" 00:21:05.218 } 00:21:05.218 }, 00:21:05.218 { 00:21:05.218 "method": "nvmf_subsystem_add_ns", 00:21:05.218 "params": { 00:21:05.218 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.218 "namespace": { 00:21:05.218 "nsid": 1, 00:21:05.218 "bdev_name": "malloc0", 00:21:05.218 "nguid": "82500687380944268AFE83BEDFACF823", 00:21:05.218 "uuid": "82500687-3809-4426-8afe-83bedfacf823", 00:21:05.218 "no_auto_visible": false 00:21:05.218 } 00:21:05.218 } 00:21:05.218 }, 00:21:05.218 { 00:21:05.218 "method": "nvmf_subsystem_add_listener", 00:21:05.218 "params": { 00:21:05.218 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:05.218 "listen_address": { 00:21:05.218 "trtype": "TCP", 00:21:05.218 "adrfam": "IPv4", 
00:21:05.218 "traddr": "10.0.0.2", 00:21:05.218 "trsvcid": "4420" 00:21:05.218 }, 00:21:05.218 "secure_channel": false, 00:21:05.218 "sock_impl": "ssl" 00:21:05.218 } 00:21:05.218 } 00:21:05.218 ] 00:21:05.218 } 00:21:05.218 ] 00:21:05.218 }' 00:21:05.218 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1099050 00:21:05.218 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1099050 00:21:05.218 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:05.218 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1099050 ']' 00:21:05.218 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.218 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:05.218 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:05.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:05.218 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:05.219 11:44:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:05.219 [2024-11-15 11:44:30.590928] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:21:05.219 [2024-11-15 11:44:30.590988] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:05.219 [2024-11-15 11:44:30.681606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.219 [2024-11-15 11:44:30.712714] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:05.219 [2024-11-15 11:44:30.712741] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:05.219 [2024-11-15 11:44:30.712747] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:05.219 [2024-11-15 11:44:30.712751] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:05.219 [2024-11-15 11:44:30.712755] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:05.219 [2024-11-15 11:44:30.713264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:05.480 [2024-11-15 11:44:30.907372] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:05.480 [2024-11-15 11:44:30.939406] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:05.480 [2024-11-15 11:44:30.939615] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:06.051 11:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:06.051 11:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:21:06.051 11:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:06.051 11:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:06.051 11:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:06.051 11:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:06.051 11:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1099169 00:21:06.051 11:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1099169 /var/tmp/bdevperf.sock 00:21:06.051 11:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 1099169 ']' 00:21:06.051 11:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:06.051 11:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:06.051 11:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:06.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
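The "Waiting for process..." message comes from the harness's waitforlisten helper: because bdevperf was started with -z it sits idle until driven over its RPC UNIX socket, and waitforlisten simply polls until that socket exists and answers. As a plain-shell illustration (not the harness's actual implementation), the wait amounts to:

# poll until the application's RPC socket is up and responding
while ! scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done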
00:21:06.051 11:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:06.051 11:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:06.051 11:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:06.051 11:44:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:21:06.051 "subsystems": [ 00:21:06.051 { 00:21:06.051 "subsystem": "keyring", 00:21:06.051 "config": [ 00:21:06.051 { 00:21:06.051 "method": "keyring_file_add_key", 00:21:06.051 "params": { 00:21:06.051 "name": "key0", 00:21:06.051 "path": "/tmp/tmp.KoZLvk7oI9" 00:21:06.051 } 00:21:06.051 } 00:21:06.051 ] 00:21:06.051 }, 00:21:06.051 { 00:21:06.051 "subsystem": "iobuf", 00:21:06.051 "config": [ 00:21:06.051 { 00:21:06.051 "method": "iobuf_set_options", 00:21:06.051 "params": { 00:21:06.051 "small_pool_count": 8192, 00:21:06.051 "large_pool_count": 1024, 00:21:06.051 "small_bufsize": 8192, 00:21:06.052 "large_bufsize": 135168, 00:21:06.052 "enable_numa": false 00:21:06.052 } 00:21:06.052 } 00:21:06.052 ] 00:21:06.052 }, 00:21:06.052 { 00:21:06.052 "subsystem": "sock", 00:21:06.052 "config": [ 00:21:06.052 { 00:21:06.052 "method": "sock_set_default_impl", 00:21:06.052 "params": { 00:21:06.052 "impl_name": "posix" 00:21:06.052 } 00:21:06.052 }, 00:21:06.052 { 00:21:06.052 "method": "sock_impl_set_options", 00:21:06.052 "params": { 00:21:06.052 "impl_name": "ssl", 00:21:06.052 "recv_buf_size": 4096, 00:21:06.052 "send_buf_size": 4096, 00:21:06.052 "enable_recv_pipe": true, 00:21:06.052 "enable_quickack": false, 00:21:06.052 "enable_placement_id": 0, 00:21:06.052 "enable_zerocopy_send_server": true, 00:21:06.052 "enable_zerocopy_send_client": false, 00:21:06.052 "zerocopy_threshold": 0, 00:21:06.052 "tls_version": 0, 00:21:06.052 "enable_ktls": false 00:21:06.052 } 00:21:06.052 }, 00:21:06.052 { 00:21:06.052 "method": "sock_impl_set_options", 00:21:06.052 "params": { 00:21:06.052 "impl_name": "posix", 00:21:06.052 "recv_buf_size": 2097152, 00:21:06.052 "send_buf_size": 2097152, 00:21:06.052 "enable_recv_pipe": true, 00:21:06.052 "enable_quickack": false, 00:21:06.052 "enable_placement_id": 0, 00:21:06.052 "enable_zerocopy_send_server": true, 00:21:06.052 "enable_zerocopy_send_client": false, 00:21:06.052 "zerocopy_threshold": 0, 00:21:06.052 "tls_version": 0, 00:21:06.052 "enable_ktls": false 00:21:06.052 } 00:21:06.052 } 00:21:06.052 ] 00:21:06.052 }, 00:21:06.052 { 00:21:06.052 "subsystem": "vmd", 00:21:06.052 "config": [] 00:21:06.052 }, 00:21:06.052 { 00:21:06.052 "subsystem": "accel", 00:21:06.052 "config": [ 00:21:06.052 { 00:21:06.052 "method": "accel_set_options", 00:21:06.052 "params": { 00:21:06.052 "small_cache_size": 128, 00:21:06.052 "large_cache_size": 16, 00:21:06.052 "task_count": 2048, 00:21:06.052 "sequence_count": 2048, 00:21:06.052 "buf_count": 2048 00:21:06.052 } 00:21:06.052 } 00:21:06.052 ] 00:21:06.052 }, 00:21:06.052 { 00:21:06.052 "subsystem": "bdev", 00:21:06.052 "config": [ 00:21:06.052 { 00:21:06.052 "method": "bdev_set_options", 00:21:06.052 "params": { 00:21:06.052 "bdev_io_pool_size": 65535, 00:21:06.052 "bdev_io_cache_size": 256, 00:21:06.052 "bdev_auto_examine": true, 00:21:06.052 "iobuf_small_cache_size": 128, 00:21:06.052 "iobuf_large_cache_size": 16 00:21:06.052 } 00:21:06.052 }, 00:21:06.052 { 00:21:06.052 "method": 
"bdev_raid_set_options", 00:21:06.052 "params": { 00:21:06.052 "process_window_size_kb": 1024, 00:21:06.052 "process_max_bandwidth_mb_sec": 0 00:21:06.052 } 00:21:06.052 }, 00:21:06.052 { 00:21:06.052 "method": "bdev_iscsi_set_options", 00:21:06.052 "params": { 00:21:06.052 "timeout_sec": 30 00:21:06.052 } 00:21:06.052 }, 00:21:06.052 { 00:21:06.052 "method": "bdev_nvme_set_options", 00:21:06.052 "params": { 00:21:06.052 "action_on_timeout": "none", 00:21:06.052 "timeout_us": 0, 00:21:06.052 "timeout_admin_us": 0, 00:21:06.052 "keep_alive_timeout_ms": 10000, 00:21:06.052 "arbitration_burst": 0, 00:21:06.052 "low_priority_weight": 0, 00:21:06.052 "medium_priority_weight": 0, 00:21:06.052 "high_priority_weight": 0, 00:21:06.052 "nvme_adminq_poll_period_us": 10000, 00:21:06.052 "nvme_ioq_poll_period_us": 0, 00:21:06.052 "io_queue_requests": 512, 00:21:06.052 "delay_cmd_submit": true, 00:21:06.052 "transport_retry_count": 4, 00:21:06.052 "bdev_retry_count": 3, 00:21:06.052 "transport_ack_timeout": 0, 00:21:06.052 "ctrlr_loss_timeout_sec": 0, 00:21:06.052 "reconnect_delay_sec": 0, 00:21:06.052 "fast_io_fail_timeout_sec": 0, 00:21:06.052 "disable_auto_failback": false, 00:21:06.052 "generate_uuids": false, 00:21:06.052 "transport_tos": 0, 00:21:06.052 "nvme_error_stat": false, 00:21:06.052 "rdma_srq_size": 0, 00:21:06.052 "io_path_stat": false, 00:21:06.052 "allow_accel_sequence": false, 00:21:06.052 "rdma_max_cq_size": 0, 00:21:06.052 "rdma_cm_event_timeout_ms": 0, 00:21:06.052 "dhchap_digests": [ 00:21:06.052 "sha256", 00:21:06.052 "sha384", 00:21:06.052 "sha512" 00:21:06.052 ], 00:21:06.052 "dhchap_dhgroups": [ 00:21:06.052 "null", 00:21:06.052 "ffdhe2048", 00:21:06.052 "ffdhe3072", 00:21:06.052 "ffdhe4096", 00:21:06.052 "ffdhe6144", 00:21:06.052 "ffdhe8192" 00:21:06.052 ] 00:21:06.052 } 00:21:06.052 }, 00:21:06.052 { 00:21:06.052 "method": "bdev_nvme_attach_controller", 00:21:06.052 "params": { 00:21:06.052 "name": "nvme0", 00:21:06.052 "trtype": "TCP", 00:21:06.052 "adrfam": "IPv4", 00:21:06.052 "traddr": "10.0.0.2", 00:21:06.052 "trsvcid": "4420", 00:21:06.052 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:06.052 "prchk_reftag": false, 00:21:06.052 "prchk_guard": false, 00:21:06.052 "ctrlr_loss_timeout_sec": 0, 00:21:06.052 "reconnect_delay_sec": 0, 00:21:06.052 "fast_io_fail_timeout_sec": 0, 00:21:06.052 "psk": "key0", 00:21:06.052 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:06.052 "hdgst": false, 00:21:06.052 "ddgst": false, 00:21:06.052 "multipath": "multipath" 00:21:06.053 } 00:21:06.053 }, 00:21:06.053 { 00:21:06.053 "method": "bdev_nvme_set_hotplug", 00:21:06.053 "params": { 00:21:06.053 "period_us": 100000, 00:21:06.053 "enable": false 00:21:06.053 } 00:21:06.053 }, 00:21:06.053 { 00:21:06.053 "method": "bdev_enable_histogram", 00:21:06.053 "params": { 00:21:06.053 "name": "nvme0n1", 00:21:06.053 "enable": true 00:21:06.053 } 00:21:06.053 }, 00:21:06.053 { 00:21:06.053 "method": "bdev_wait_for_examine" 00:21:06.053 } 00:21:06.053 ] 00:21:06.053 }, 00:21:06.053 { 00:21:06.053 "subsystem": "nbd", 00:21:06.053 "config": [] 00:21:06.053 } 00:21:06.053 ] 00:21:06.053 }' 00:21:06.053 [2024-11-15 11:44:31.479080] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:21:06.053 [2024-11-15 11:44:31.479135] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1099169 ] 00:21:06.313 [2024-11-15 11:44:31.562487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.313 [2024-11-15 11:44:31.592398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:06.313 [2024-11-15 11:44:31.728293] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:06.886 11:44:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:06.886 11:44:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:21:06.886 11:44:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:06.886 11:44:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:21:07.147 11:44:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.147 11:44:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:07.147 Running I/O for 1 seconds... 00:21:08.088 5941.00 IOPS, 23.21 MiB/s 00:21:08.088 Latency(us) 00:21:08.088 [2024-11-15T10:44:33.586Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:08.088 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:08.088 Verification LBA range: start 0x0 length 0x2000 00:21:08.088 nvme0n1 : 1.02 5949.85 23.24 0.00 0.00 21338.42 4505.60 28617.39 00:21:08.088 [2024-11-15T10:44:33.586Z] =================================================================================================================== 00:21:08.088 [2024-11-15T10:44:33.586Z] Total : 5949.85 23.24 0.00 0.00 21338.42 4505.60 28617.39 00:21:08.088 { 00:21:08.088 "results": [ 00:21:08.088 { 00:21:08.088 "job": "nvme0n1", 00:21:08.088 "core_mask": "0x2", 00:21:08.088 "workload": "verify", 00:21:08.088 "status": "finished", 00:21:08.088 "verify_range": { 00:21:08.088 "start": 0, 00:21:08.088 "length": 8192 00:21:08.088 }, 00:21:08.088 "queue_depth": 128, 00:21:08.088 "io_size": 4096, 00:21:08.088 "runtime": 1.020193, 00:21:08.088 "iops": 5949.854586338075, 00:21:08.088 "mibps": 23.241619477883106, 00:21:08.088 "io_failed": 0, 00:21:08.088 "io_timeout": 0, 00:21:08.088 "avg_latency_us": 21338.416781987915, 00:21:08.088 "min_latency_us": 4505.6, 00:21:08.088 "max_latency_us": 28617.386666666665 00:21:08.088 } 00:21:08.088 ], 00:21:08.088 "core_count": 1 00:21:08.088 } 00:21:08.349 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:21:08.349 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:21:08.349 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:08.349 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:21:08.349 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 00:21:08.349 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = --pid 
']' 00:21:08.349 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:08.349 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:21:08.349 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:21:08.349 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:21:08.349 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:08.349 nvmf_trace.0 00:21:08.349 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:21:08.349 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1099169 00:21:08.349 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1099169 ']' 00:21:08.349 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1099169 00:21:08.349 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:21:08.349 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:08.349 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1099169 00:21:08.349 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:08.349 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:08.349 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1099169' 00:21:08.349 killing process with pid 1099169 00:21:08.349 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1099169 00:21:08.349 Received shutdown signal, test time was about 1.000000 seconds 00:21:08.349 00:21:08.349 Latency(us) 00:21:08.349 [2024-11-15T10:44:33.847Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:08.349 [2024-11-15T10:44:33.847Z] =================================================================================================================== 00:21:08.349 [2024-11-15T10:44:33.847Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:08.349 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1099169 00:21:08.609 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:08.609 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:08.609 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:21:08.609 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:08.609 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:21:08.609 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:08.609 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:08.609 rmmod nvme_tcp 00:21:08.609 rmmod nvme_fabrics 00:21:08.609 rmmod nvme_keyring 00:21:08.609 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:08.609 11:44:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:21:08.609 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:21:08.609 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 1099050 ']' 00:21:08.609 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 1099050 00:21:08.609 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 1099050 ']' 00:21:08.609 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 1099050 00:21:08.609 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:21:08.609 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:08.609 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1099050 00:21:08.609 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:08.609 11:44:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:08.609 11:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1099050' 00:21:08.609 killing process with pid 1099050 00:21:08.609 11:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 1099050 00:21:08.609 11:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 1099050 00:21:08.870 11:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:08.870 11:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:08.870 11:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:08.870 11:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:21:08.870 11:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:21:08.870 11:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:08.870 11:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:21:08.870 11:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:08.870 11:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:08.870 11:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:08.870 11:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:08.870 11:44:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:10.783 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:10.783 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.GdkkZjzBeO /tmp/tmp.yTLblbpUFJ /tmp/tmp.KoZLvk7oI9 00:21:10.783 00:21:10.783 real 1m26.305s 00:21:10.783 user 2m15.723s 00:21:10.783 sys 0m27.365s 00:21:10.783 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:10.783 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:10.783 ************************************ 00:21:10.783 END TEST nvmf_tls 
00:21:10.783 ************************************ 00:21:10.783 11:44:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:10.783 11:44:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:10.783 11:44:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:10.783 11:44:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:10.783 ************************************ 00:21:10.783 START TEST nvmf_fips 00:21:10.783 ************************************ 00:21:10.783 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:11.045 * Looking for test storage... 00:21:11.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:11.045 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:11.045 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:21:11.045 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:11.045 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:11.045 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:11.045 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:11.045 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:11.045 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:11.045 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:11.045 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:11.045 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:11.045 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:21:11.045 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:21:11.045 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:21:11.045 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:11.045 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:11.045 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:11.045 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:11.045 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:11.045 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:11.045 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:11.045 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:11.045 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:11.045 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:11.045 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:21:11.045 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:21:11.045 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:11.045 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:21:11.045 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:21:11.045 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:11.045 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:11.045 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:11.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.046 --rc genhtml_branch_coverage=1 00:21:11.046 --rc genhtml_function_coverage=1 00:21:11.046 --rc genhtml_legend=1 00:21:11.046 --rc geninfo_all_blocks=1 00:21:11.046 --rc geninfo_unexecuted_blocks=1 00:21:11.046 00:21:11.046 ' 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:11.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.046 --rc genhtml_branch_coverage=1 00:21:11.046 --rc genhtml_function_coverage=1 00:21:11.046 --rc genhtml_legend=1 00:21:11.046 --rc geninfo_all_blocks=1 00:21:11.046 --rc geninfo_unexecuted_blocks=1 00:21:11.046 00:21:11.046 ' 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:11.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.046 --rc genhtml_branch_coverage=1 00:21:11.046 --rc genhtml_function_coverage=1 00:21:11.046 --rc genhtml_legend=1 00:21:11.046 --rc geninfo_all_blocks=1 00:21:11.046 --rc geninfo_unexecuted_blocks=1 00:21:11.046 00:21:11.046 ' 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:11.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:11.046 --rc genhtml_branch_coverage=1 00:21:11.046 --rc genhtml_function_coverage=1 00:21:11.046 --rc genhtml_legend=1 00:21:11.046 --rc geninfo_all_blocks=1 00:21:11.046 --rc geninfo_unexecuted_blocks=1 00:21:11.046 00:21:11.046 ' 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:11.046 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:21:11.046 11:44:36 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:11.046 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:21:11.309 Error setting digest 00:21:11.309 40F28DA6D27F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:21:11.309 40F28DA6D27F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:11.309 
11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:21:11.309 11:44:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:19.448 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:19.448 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:21:19.448 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:19.448 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:19.449 11:44:43 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:19.449 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:19.449 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:19.449 11:44:43 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:19.449 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:19.449 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:19.449 11:44:43 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:19.449 11:44:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:19.449 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:19.449 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:19.449 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:19.449 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:19.449 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:19.449 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.597 ms 00:21:19.449 00:21:19.449 --- 10.0.0.2 ping statistics --- 00:21:19.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.449 rtt min/avg/max/mdev = 0.597/0.597/0.597/0.000 ms 00:21:19.449 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:19.449 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
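
Editor's note: the namespace plumbing traced just above splits one host into a target side and an initiator side. A minimal sketch of the same setup, using the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing from this run (both will differ on other machines):

    ip netns add cvl_0_0_ns_spdk                               # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator keeps cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                         # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator

The two pings below are the harness verifying exactly this reachability in both directions.
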
00:21:19.449 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:21:19.449 00:21:19.449 --- 10.0.0.1 ping statistics --- 00:21:19.449 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.449 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:21:19.449 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:19.449 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:21:19.449 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:19.449 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:19.449 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:19.449 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:19.449 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:19.449 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:19.449 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:19.449 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:21:19.449 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:19.449 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:19.449 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:19.449 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=1103873 00:21:19.449 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 1103873 00:21:19.449 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:19.449 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 1103873 ']' 00:21:19.449 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.449 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:19.449 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.449 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:19.449 11:44:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:19.449 [2024-11-15 11:44:44.221920] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
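
Editor's note: nvmfappstart, traced here, launches nvmf_tgt inside the target namespace, records its pid, and then waits for the RPC socket. A hedged sketch of that pattern; the harness's waitforlisten polls with retries, and the single probe below is a simplification:

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # waitforlisten equivalent, simplified to one probe of the default socket:
    scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods > /dev/null
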
00:21:19.449 [2024-11-15 11:44:44.222001] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.449 [2024-11-15 11:44:44.323546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.449 [2024-11-15 11:44:44.374274] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:19.449 [2024-11-15 11:44:44.374328] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:19.449 [2024-11-15 11:44:44.374337] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:19.449 [2024-11-15 11:44:44.374344] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:19.450 [2024-11-15 11:44:44.374351] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:19.450 [2024-11-15 11:44:44.375174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.710 11:44:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:19.710 11:44:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:21:19.710 11:44:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:19.710 11:44:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:19.710 11:44:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:19.710 11:44:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:19.710 11:44:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:21:19.710 11:44:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:19.710 11:44:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:21:19.710 11:44:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.2Fk 00:21:19.710 11:44:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:19.710 11:44:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.2Fk 00:21:19.710 11:44:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.2Fk 00:21:19.710 11:44:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.2Fk 00:21:19.710 11:44:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:19.971 [2024-11-15 11:44:45.243444] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:19.971 [2024-11-15 11:44:45.259430] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:19.971 [2024-11-15 11:44:45.259756] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:19.971 malloc0 00:21:19.971 11:44:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:19.971 11:44:45 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1104224 00:21:19.971 11:44:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1104224 /var/tmp/bdevperf.sock 00:21:19.971 11:44:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:19.971 11:44:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 1104224 ']' 00:21:19.971 11:44:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:19.971 11:44:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:19.971 11:44:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:19.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:19.971 11:44:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:19.971 11:44:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:19.971 [2024-11-15 11:44:45.402057] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:21:19.971 [2024-11-15 11:44:45.402134] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1104224 ] 00:21:20.233 [2024-11-15 11:44:45.495276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.233 [2024-11-15 11:44:45.546192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:20.807 11:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:20.807 11:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:21:20.807 11:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.2Fk 00:21:21.069 11:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:21.069 [2024-11-15 11:44:46.537557] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:21.330 TLSTESTn1 00:21:21.330 11:44:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:21.330 Running I/O for 10 seconds... 
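
Editor's note: for reference, the TLS attach sequence traced above, collapsed into one sketch. The PSK value, key path, NQNs, and address are the ones from this run and serve as examples only:

    echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: > /tmp/spdk-psk.2Fk
    chmod 0600 /tmp/spdk-psk.2Fk                       # PSK files must not be world-readable
    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.2Fk
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The ten IOPS samples that follow are bdevperf's per-second progress for that 10-second verify workload.
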
00:21:23.696 3828.00 IOPS, 14.95 MiB/s [2024-11-15T10:44:49.864Z] 3930.00 IOPS, 15.35 MiB/s [2024-11-15T10:44:50.812Z] 3946.67 IOPS, 15.42 MiB/s [2024-11-15T10:44:51.753Z] 4265.25 IOPS, 16.66 MiB/s [2024-11-15T10:44:53.139Z] 4596.00 IOPS, 17.95 MiB/s [2024-11-15T10:44:54.082Z] 4817.17 IOPS, 18.82 MiB/s [2024-11-15T10:44:55.024Z] 4879.57 IOPS, 19.06 MiB/s [2024-11-15T10:44:55.967Z] 4904.50 IOPS, 19.16 MiB/s [2024-11-15T10:44:56.909Z] 5014.67 IOPS, 19.59 MiB/s [2024-11-15T10:44:56.909Z] 5035.30 IOPS, 19.67 MiB/s 00:21:31.411 Latency(us) 00:21:31.411 [2024-11-15T10:44:56.909Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.411 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:31.411 Verification LBA range: start 0x0 length 0x2000 00:21:31.411 TLSTESTn1 : 10.01 5041.03 19.69 0.00 0.00 25357.26 5707.09 69905.07 00:21:31.411 [2024-11-15T10:44:56.909Z] =================================================================================================================== 00:21:31.411 [2024-11-15T10:44:56.909Z] Total : 5041.03 19.69 0.00 0.00 25357.26 5707.09 69905.07 00:21:31.411 { 00:21:31.411 "results": [ 00:21:31.411 { 00:21:31.411 "job": "TLSTESTn1", 00:21:31.411 "core_mask": "0x4", 00:21:31.411 "workload": "verify", 00:21:31.412 "status": "finished", 00:21:31.412 "verify_range": { 00:21:31.412 "start": 0, 00:21:31.412 "length": 8192 00:21:31.412 }, 00:21:31.412 "queue_depth": 128, 00:21:31.412 "io_size": 4096, 00:21:31.412 "runtime": 10.013829, 00:21:31.412 "iops": 5041.0287613259625, 00:21:31.412 "mibps": 19.69151859892954, 00:21:31.412 "io_failed": 0, 00:21:31.412 "io_timeout": 0, 00:21:31.412 "avg_latency_us": 25357.25763549921, 00:21:31.412 "min_latency_us": 5707.093333333333, 00:21:31.412 "max_latency_us": 69905.06666666667 00:21:31.412 } 00:21:31.412 ], 00:21:31.412 "core_count": 1 00:21:31.412 } 00:21:31.412 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:21:31.412 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:21:31.412 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:21:31.412 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:21:31.412 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:21:31.412 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:31.412 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:21:31.412 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:21:31.412 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:21:31.412 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:31.412 nvmf_trace.0 00:21:31.412 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:21:31.412 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1104224 00:21:31.412 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 1104224 ']' 00:21:31.412 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@956 -- # kill -0 1104224 00:21:31.412 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:21:31.412 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:31.412 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1104224 00:21:31.673 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:21:31.673 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:21:31.673 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1104224' 00:21:31.673 killing process with pid 1104224 00:21:31.673 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 1104224 00:21:31.673 Received shutdown signal, test time was about 10.000000 seconds 00:21:31.673 00:21:31.673 Latency(us) 00:21:31.673 [2024-11-15T10:44:57.171Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.673 [2024-11-15T10:44:57.171Z] =================================================================================================================== 00:21:31.673 [2024-11-15T10:44:57.171Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:31.673 11:44:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 1104224 00:21:31.673 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:21:31.673 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:31.673 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:21:31.673 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:31.673 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:21:31.673 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:31.673 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:31.673 rmmod nvme_tcp 00:21:31.673 rmmod nvme_fabrics 00:21:31.673 rmmod nvme_keyring 00:21:31.673 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:31.673 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:21:31.673 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:21:31.673 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 1103873 ']' 00:21:31.673 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 1103873 00:21:31.673 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 1103873 ']' 00:21:31.673 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 1103873 00:21:31.673 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:21:31.673 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:31.673 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1103873 00:21:31.934 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:31.934 11:44:57 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:31.934 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1103873' 00:21:31.934 killing process with pid 1103873 00:21:31.934 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 1103873 00:21:31.934 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 1103873 00:21:31.934 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:31.934 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:31.934 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:31.934 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:21:31.934 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:21:31.934 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:31.934 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:21:31.934 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:31.934 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:31.934 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.934 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:31.934 11:44:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.2Fk 00:21:34.481 00:21:34.481 real 0m23.127s 00:21:34.481 user 0m24.301s 00:21:34.481 sys 0m10.102s 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:34.481 ************************************ 00:21:34.481 END TEST nvmf_fips 00:21:34.481 ************************************ 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:34.481 ************************************ 00:21:34.481 START TEST nvmf_control_msg_list 00:21:34.481 ************************************ 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:34.481 * Looking for test storage... 
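
Editor's note: the storage probe starting here leads into the same shell version gates seen earlier for lcov and openssl (scripts/common.sh's cmp_versions, driving lt and ge). A standalone, simplified re-implementation of that split-and-compare idiom, for readers following the trace; it is not the actual SPDK helper:

    lt() {                       # returns 0 (true) if $1 < $2 as dot-separated numbers
        local IFS=. i
        local -a a=($1) b=($2)
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                 # equal is not less-than
    }
    lt 1.15 2 && echo '1.15 < 2'               # the lcov gate traced below
    lt 3.1.1 3.0.0 || echo '3.1.1 >= 3.0.0'    # the openssl FIPS gate traced earlier
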
00:21:34.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:34.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.481 --rc genhtml_branch_coverage=1 00:21:34.481 --rc genhtml_function_coverage=1 00:21:34.481 --rc genhtml_legend=1 00:21:34.481 --rc geninfo_all_blocks=1 00:21:34.481 --rc geninfo_unexecuted_blocks=1 00:21:34.481 00:21:34.481 ' 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:34.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.481 --rc genhtml_branch_coverage=1 00:21:34.481 --rc genhtml_function_coverage=1 00:21:34.481 --rc genhtml_legend=1 00:21:34.481 --rc geninfo_all_blocks=1 00:21:34.481 --rc geninfo_unexecuted_blocks=1 00:21:34.481 00:21:34.481 ' 00:21:34.481 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:34.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.482 --rc genhtml_branch_coverage=1 00:21:34.482 --rc genhtml_function_coverage=1 00:21:34.482 --rc genhtml_legend=1 00:21:34.482 --rc geninfo_all_blocks=1 00:21:34.482 --rc geninfo_unexecuted_blocks=1 00:21:34.482 00:21:34.482 ' 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:34.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.482 --rc genhtml_branch_coverage=1 00:21:34.482 --rc genhtml_function_coverage=1 00:21:34.482 --rc genhtml_legend=1 00:21:34.482 --rc geninfo_all_blocks=1 00:21:34.482 --rc geninfo_unexecuted_blocks=1 00:21:34.482 00:21:34.482 ' 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:34.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:21:34.482 11:44:59 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:21:42.631 11:45:06 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:42.631 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.631 11:45:06 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:42.631 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:42.631 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:42.631 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:42.632 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.632 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:42.632 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:42.632 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.632 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:42.632 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:21:42.632 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:42.632 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:42.632 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:42.632 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:42.632 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:42.632 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:42.632 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:42.632 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:42.632 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:42.632 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:42.632 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:42.632 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:42.632 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:42.632 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:42.632 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:42.632 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:42.632 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:42.632 11:45:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:42.632 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:42.632 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:42.632 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:42.632 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:42.632 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:42.632 11:45:07 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:42.632 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:42.632 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:42.632 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:42.632 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:21:42.632 00:21:42.632 --- 10.0.0.2 ping statistics --- 00:21:42.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.632 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:21:42.632 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:42.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:42.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:21:42.632 00:21:42.632 --- 10.0.0.1 ping statistics --- 00:21:42.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.632 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:21:42.632 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:42.632 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:21:42.632 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:42.632 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:42.632 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:42.632 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:42.632 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:42.632 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:42.632 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:42.632 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:21:42.632 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:42.632 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:42.632 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:42.632 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=1110693 00:21:42.632 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 1110693 00:21:42.632 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:42.632 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 1110693 ']' 00:21:42.632 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.632 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:42.632 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:42.632 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:42.632 11:45:07 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:42.632 [2024-11-15 11:45:07.346586] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:21:42.632 [2024-11-15 11:45:07.346653] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:42.632 [2024-11-15 11:45:07.449460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.632 [2024-11-15 11:45:07.501250] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:42.632 [2024-11-15 11:45:07.501303] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:42.632 [2024-11-15 11:45:07.501312] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:42.632 [2024-11-15 11:45:07.501319] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:42.632 [2024-11-15 11:45:07.501324] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
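The nvmfappstart call above reduces to launching the target inside the server-side namespace and polling its RPC socket until it answers. A minimal standalone sketch (the harness's waitforlisten additionally verifies the pid stays alive and bounds the retries; spdk_get_version is used here only as a cheap RPC to probe /var/tmp/spdk.sock):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!                       # 1110693 in this run
    until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.5                    # wait for the UNIX domain socket to accept RPCs
    done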
00:21:42.632 [2024-11-15 11:45:07.502125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.894 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:42.894 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:21:42.894 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:42.894 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:42.894 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:42.894 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:42.894 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:42.894 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:42.894 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:21:42.894 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.894 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:42.894 [2024-11-15 11:45:08.226032] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:42.894 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.894 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:21:42.894 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.894 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:42.894 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.894 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:42.894 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.894 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:42.894 Malloc0 00:21:42.894 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.894 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:42.894 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.894 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:42.894 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.894 11:45:08 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:42.894 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.894 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:42.894 [2024-11-15 11:45:08.280514] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:42.894 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.894 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1111043 00:21:42.894 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:42.894 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1111044 00:21:42.894 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:42.894 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1111045 00:21:42.894 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1111043 00:21:42.894 11:45:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:42.894 [2024-11-15 11:45:08.381090] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:43.155 [2024-11-15 11:45:08.391160] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:43.155 [2024-11-15 11:45:08.391489] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:44.100 Initializing NVMe Controllers 00:21:44.100 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:44.100 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:21:44.100 Initialization complete. Launching workers. 
00:21:44.100 ======================================================== 00:21:44.100 Latency(us) 00:21:44.100 Device Information : IOPS MiB/s Average min max 00:21:44.100 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 1442.00 5.63 693.39 316.81 1223.94 00:21:44.100 ======================================================== 00:21:44.100 Total : 1442.00 5.63 693.39 316.81 1223.94 00:21:44.100 00:21:44.100 Initializing NVMe Controllers 00:21:44.100 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:44.100 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:21:44.100 Initialization complete. Launching workers. 00:21:44.100 ======================================================== 00:21:44.100 Latency(us) 00:21:44.100 Device Information : IOPS MiB/s Average min max 00:21:44.100 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 1819.00 7.11 549.59 208.01 41792.92 00:21:44.100 ======================================================== 00:21:44.100 Total : 1819.00 7.11 549.59 208.01 41792.92 00:21:44.100 00:21:44.100 Initializing NVMe Controllers 00:21:44.100 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:44.100 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:21:44.100 Initialization complete. Launching workers. 00:21:44.100 ======================================================== 00:21:44.100 Latency(us) 00:21:44.100 Device Information : IOPS MiB/s Average min max 00:21:44.100 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 1758.00 6.87 568.63 158.01 893.80 00:21:44.100 ======================================================== 00:21:44.100 Total : 1758.00 6.87 568.63 158.01 893.80 00:21:44.100 00:21:44.100 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1111044 00:21:44.100 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1111045 00:21:44.100 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:44.100 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:21:44.100 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:44.100 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:21:44.100 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:44.100 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:21:44.100 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:44.100 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:44.100 rmmod nvme_tcp 00:21:44.100 rmmod nvme_fabrics 00:21:44.100 rmmod nvme_keyring 00:21:44.100 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:44.100 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:21:44.100 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:21:44.100 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' 
-n 1110693 ']' 00:21:44.100 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 1110693 00:21:44.100 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 1110693 ']' 00:21:44.100 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 1110693 00:21:44.100 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:21:44.361 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:44.361 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1110693 00:21:44.361 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:44.361 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:44.361 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1110693' 00:21:44.361 killing process with pid 1110693 00:21:44.361 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 1110693 00:21:44.361 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@976 -- # wait 1110693 00:21:44.361 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:44.361 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:44.361 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:44.361 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:21:44.361 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:21:44.361 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:44.361 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:21:44.361 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:44.361 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:44.361 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:44.361 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:44.361 11:45:09 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.909 11:45:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:46.909 00:21:46.909 real 0m12.440s 00:21:46.909 user 0m7.847s 00:21:46.909 sys 0m6.697s 00:21:46.909 11:45:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:46.909 11:45:11 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:46.909 ************************************ 00:21:46.909 END TEST nvmf_control_msg_list 00:21:46.909 ************************************ 
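Stripped of xtrace noise, the test body that just finished reduces to the RPC sequence and perf runs below. A sketch: rpc here stands in for the harness's rpc_cmd wrapper around scripts/rpc.py, '-t tcp -o' is taken verbatim from NVMF_TRANSPORT_OPTS as set earlier in the log, and --control-msg-num 1 is the point of the test, starving three concurrent single-queue initiators of control-message buffers so the list handling is exercised under contention:

    rpc() { ./scripts/rpc.py "$@"; }
    rpc nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
    rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a        # -a: allow any host
    rpc bdev_malloc_create -b Malloc0 32 512                       # 32 MiB ramdisk, 512 B blocks
    rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    for core in 0x2 0x4 0x8; do                                    # one initiator per core, as above
        ./build/bin/spdk_nvme_perf -c $core -q 1 -o 4096 -w randread -t 1 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
    done
    wait

The three per-core latency tables above are the output of those perf instances; the ~41 ms max on core 2 is presumably the queueing that the single control-message buffer induces.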
00:21:46.909 11:45:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:46.909 11:45:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:46.909 11:45:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:46.909 11:45:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:46.909 ************************************ 00:21:46.909 START TEST nvmf_wait_for_buf 00:21:46.909 ************************************ 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:46.909 * Looking for test storage... 00:21:46.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:46.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.909 --rc genhtml_branch_coverage=1 00:21:46.909 --rc genhtml_function_coverage=1 00:21:46.909 --rc genhtml_legend=1 00:21:46.909 --rc geninfo_all_blocks=1 00:21:46.909 --rc geninfo_unexecuted_blocks=1 00:21:46.909 00:21:46.909 ' 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:46.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.909 --rc genhtml_branch_coverage=1 00:21:46.909 --rc genhtml_function_coverage=1 00:21:46.909 --rc genhtml_legend=1 00:21:46.909 --rc geninfo_all_blocks=1 00:21:46.909 --rc geninfo_unexecuted_blocks=1 00:21:46.909 00:21:46.909 ' 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:46.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.909 --rc genhtml_branch_coverage=1 00:21:46.909 --rc genhtml_function_coverage=1 00:21:46.909 --rc genhtml_legend=1 00:21:46.909 --rc geninfo_all_blocks=1 00:21:46.909 --rc geninfo_unexecuted_blocks=1 00:21:46.909 00:21:46.909 ' 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:46.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:46.909 --rc genhtml_branch_coverage=1 00:21:46.909 --rc genhtml_function_coverage=1 00:21:46.909 --rc genhtml_legend=1 00:21:46.909 --rc geninfo_all_blocks=1 00:21:46.909 --rc geninfo_unexecuted_blocks=1 00:21:46.909 00:21:46.909 ' 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:46.909 11:45:12 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:46.909 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:46.910 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:46.910 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:46.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:46.910 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:46.910 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:46.910 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:46.910 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:21:46.910 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:21:46.910 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:46.910 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:46.910 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:46.910 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:46.910 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.910 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:46.910 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.910 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:46.910 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:46.910 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:46.910 11:45:12 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:55.052 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:55.052 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:55.052 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:55.052 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:55.052 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:55.052 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:55.052 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:55.052 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:55.052 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:55.052 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:55.052 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:55.052 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:55.052 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:55.052 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:55.052 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:55.052 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:55.052 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:55.052 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:55.052 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:55.052 
11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:55.052 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:55.052 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:55.052 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:55.053 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:55.053 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:55.053 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:55.053 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:55.053 11:45:19 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:55.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:55.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.593 ms 00:21:55.053 00:21:55.053 --- 10.0.0.2 ping statistics --- 00:21:55.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.053 rtt min/avg/max/mdev = 0.593/0.593/0.593/0.000 ms 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:55.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:55.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:21:55.053 00:21:55.053 --- 10.0.0.1 ping statistics --- 00:21:55.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.053 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=1115841 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 1115841 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 1115841 ']' 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:55.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:55.053 11:45:19 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:55.054 [2024-11-15 11:45:19.846535] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
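For readers following the trace: everything from remove_spdk_ns through the two pings above is nvmf_tcp_init building a small point-to-point topology on the two E810 ports. One port stays in the default namespace as the initiator (cvl_0_1, 10.0.0.1) and the other is moved into the cvl_0_0_ns_spdk namespace as the target (cvl_0_0, 10.0.0.2), with an iptables rule admitting NVMe/TCP on port 4420 and a ping in each direction to prove the link. A minimal standalone sketch of the same pattern, using the interface names and addresses from this log (run as root; a condensed illustration, not the test script itself):

# Target port lives in its own network namespace; initiator stays in the default one.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP
ping -c 1 10.0.0.2                                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator

The target application is then started inside that namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc), which is where the SPDK/DPDK startup banner that continues below comes from.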
00:21:55.054 [2024-11-15 11:45:19.846605] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:55.054 [2024-11-15 11:45:19.946445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.054 [2024-11-15 11:45:19.999943] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:55.054 [2024-11-15 11:45:19.999993] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:55.054 [2024-11-15 11:45:20.000002] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:55.054 [2024-11-15 11:45:20.000015] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:55.054 [2024-11-15 11:45:20.000021] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:55.054 [2024-11-15 11:45:20.000784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.315 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:55.315 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:21:55.315 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:55.315 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:55.315 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:55.315 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:55.315 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:55.315 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:55.315 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:55.315 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.315 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:55.315 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.315 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:55.315 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.315 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:55.315 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.315 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:55.315 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.315 11:45:20 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:55.315 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.315 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:55.315 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.315 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:55.576 Malloc0 00:21:55.576 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.576 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:55.576 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.576 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:55.576 [2024-11-15 11:45:20.822871] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:55.576 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.576 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:55.576 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.576 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:55.576 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.576 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:55.576 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.576 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:55.576 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.576 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:55.576 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.576 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:55.576 [2024-11-15 11:45:20.859189] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:55.576 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.576 11:45:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:55.576 [2024-11-15 11:45:20.960671] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:21:56.964 Initializing NVMe Controllers
00:21:56.964 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:21:56.964 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0
00:21:56.964 Initialization complete. Launching workers.
00:21:56.964 ========================================================
00:21:56.964 Latency(us)
00:21:56.964 Device Information : IOPS MiB/s Average min max
00:21:56.964 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 25.00 3.12 165994.97 47857.55 191556.35
00:21:56.964 ========================================================
00:21:56.964 Total : 25.00 3.12 165994.97 47857.55 191556.35
00:21:56.964
00:21:56.964 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:56.964 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:57.225 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=374 00:21:57.225 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 374 -eq 0 ]] 00:21:57.225 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:57.225 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:21:57.225 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:57.225 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:21:57.225 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:57.225 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:21:57.225 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:57.225 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:57.225 rmmod nvme_tcp 00:21:57.225 rmmod nvme_fabrics 00:21:57.225 rmmod nvme_keyring 00:21:57.225 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:57.225 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:21:57.225 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:21:57.225 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 1115841 ']' 00:21:57.225 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 1115841 00:21:57.225 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 1115841 ']' 00:21:57.225 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # kill -0 1115841 00:21:57.225 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@957 -- # uname 00:21:57.225 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:57.225 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1115841 00:21:57.225 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:57.225 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:57.225 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1115841' 00:21:57.225 killing process with pid 1115841 00:21:57.225 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 1115841 00:21:57.225 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 1115841 00:21:57.486 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:57.486 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:57.486 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:57.486 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:57.486 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:21:57.486 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:57.486 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:21:57.486 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:57.486 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:57.486 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.486 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:57.486 11:45:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.399 11:45:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:59.399 00:21:59.399 real 0m12.868s 00:21:59.399 user 0m5.266s 00:21:59.399 sys 0m6.180s 00:21:59.399 11:45:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:59.399 11:45:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:59.399 ************************************ 00:21:59.399 END TEST nvmf_wait_for_buf 00:21:59.399 ************************************ 00:21:59.659 11:45:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:59.659 11:45:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:59.659 11:45:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:59.659 11:45:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:59.659 11:45:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:59.659 11:45:24 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:07.798 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:07.798 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:22:07.798 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:07.798 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:07.798 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:07.798 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:07.798 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:07.798 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:22:07.798 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:07.798 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:22:07.798 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:22:07.798 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:22:07.798 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:22:07.798 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:22:07.798 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:22:07.798 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:07.798 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:07.798 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:07.798 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:07.798 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:07.798 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:07.798 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:07.798 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:07.798 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:07.798 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:07.799 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:07.799 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:07.799 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:07.799 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:07.799 ************************************ 00:22:07.799 START TEST nvmf_perf_adq 00:22:07.799 ************************************ 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:07.799 * Looking for test storage... 00:22:07.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:07.799 11:45:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:07.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.799 --rc genhtml_branch_coverage=1 00:22:07.799 --rc genhtml_function_coverage=1 00:22:07.799 --rc genhtml_legend=1 00:22:07.799 --rc geninfo_all_blocks=1 00:22:07.799 --rc geninfo_unexecuted_blocks=1 00:22:07.799 00:22:07.799 ' 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:07.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.799 --rc genhtml_branch_coverage=1 00:22:07.799 --rc genhtml_function_coverage=1 00:22:07.799 --rc genhtml_legend=1 00:22:07.799 --rc geninfo_all_blocks=1 00:22:07.799 --rc geninfo_unexecuted_blocks=1 00:22:07.799 00:22:07.799 ' 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:07.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.799 --rc genhtml_branch_coverage=1 00:22:07.799 --rc genhtml_function_coverage=1 00:22:07.799 --rc genhtml_legend=1 00:22:07.799 --rc geninfo_all_blocks=1 00:22:07.799 --rc geninfo_unexecuted_blocks=1 00:22:07.799 00:22:07.799 ' 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:07.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:07.799 --rc genhtml_branch_coverage=1 00:22:07.799 --rc genhtml_function_coverage=1 00:22:07.799 --rc genhtml_legend=1 00:22:07.799 --rc geninfo_all_blocks=1 00:22:07.799 --rc geninfo_unexecuted_blocks=1 00:22:07.799 00:22:07.799 ' 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
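The cmp_versions trace above is scripts/common.sh testing whether the installed lcov is older than 2, so the matching --rc option spelling can be chosen for coverage runs. Reduced to a self-contained sketch (a simplified paraphrase of the logic visible in the trace, not the actual helper):

# lt A B: succeed when version A sorts strictly before version B.
# Fields are split on '.', '-' and ':' and compared numerically, as in the trace.
lt() {
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # earlier field decides
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
  done
  return 1                                            # equal versions: not less-than
}
lt 1.15 2 && echo "lcov < 2: use the --rc lcov_branch_coverage=1 spelling"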
00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:07.799 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:07.800 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:07.800 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:07.800 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:07.800 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:07.800 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:07.800 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:07.800 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:07.800 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:07.800 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:07.800 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:07.800 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:07.800 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:22:07.800 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:07.800 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:07.800 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:07.800 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.800 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.800 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.800 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:07.800 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:07.800 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:22:07.800 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:07.800 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:07.800 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:07.800 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:07.800 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:07.800 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:07.800 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:07.800 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:07.800 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:07.800 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:07.800 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:07.800 11:45:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:07.800 11:45:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:14.390 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:14.390 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:14.390 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:14.390 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:14.390 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:14.390 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:14.390 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:14.390 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:14.390 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:14.390 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:14.390 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:14.390 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:14.390 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:14.390 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:14.390 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:14.390 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:14.390 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:14.390 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:14.390 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:14.390 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:14.390 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:14.390 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:14.390 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:14.390 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:14.390 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:14.390 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:14.390 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:14.390 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:14.390 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:14.390 11:45:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:14.390 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:14.390 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:14.390 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:14.390 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:14.390 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:14.391 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:14.391 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:14.391 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:14.391 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.391 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.391 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:14.391 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:14.391 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:14.391 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:14.391 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:14.391 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:14.391 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:14.391 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:14.391 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:14.391 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:14.391 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:14.391 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:14.391 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:14.391 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.391 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:14.391 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.391 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:14.391 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:14.391 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.391 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:14.391 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:14.391 11:45:39 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.391 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:14.391 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:14.391 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:14.391 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:14.391 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:14.391 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:14.391 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:14.391 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:14.391 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:14.391 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:14.391 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:14.391 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:14.391 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:14.391 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:14.391 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:22:14.391 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:14.391 11:45:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:15.777 11:45:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:17.691 11:45:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:22.987 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:22:22.987 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:22.987 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:22.987 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:22.987 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:22.987 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:22.987 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.987 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:22.987 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:22.987 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:22.987 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:22:22.987 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:22.987 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:22.987 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:22.987 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:22.987 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:22.987 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:22.987 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:22.987 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:22.987 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:22.987 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:22.987 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:22.987 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:22.987 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:22.988 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:22.988 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:22.988 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:22.988 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:22.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:22.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:22:22.988 00:22:22.988 --- 10.0.0.2 ping statistics --- 00:22:22.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.988 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:22.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:22.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:22:22.988 00:22:22.988 --- 10.0.0.1 ping statistics --- 00:22:22.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.988 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:22.988 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1126082 00:22:22.989 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1126082 00:22:22.989 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:22.989 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 1126082 ']' 00:22:22.989 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.989 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:22.989 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:22.989 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:22.989 11:45:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:23.250 [2024-11-15 11:45:48.528441] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
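
Aside: all of the nvmf_tcp_init plumbing above reduces to a short recipe: put the target port into its own network namespace, address both ends on 10.0.0.0/24, open TCP port 4420, then ping in both directions. A condensed sketch of exactly the commands traced (assumes the cvl_0_0/cvl_0_1 interfaces exist and root privileges):

# Condensed sketch of the nvmf_tcp_init sequence above.
ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP listener port
ping -c 1 10.0.0.2                                                  # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> root ns
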
00:22:23.251 [2024-11-15 11:45:48.528505] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:23.251 [2024-11-15 11:45:48.628899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:23.251 [2024-11-15 11:45:48.683660] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:23.251 [2024-11-15 11:45:48.683711] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:23.251 [2024-11-15 11:45:48.683719] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:23.251 [2024-11-15 11:45:48.683726] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:23.251 [2024-11-15 11:45:48.683733] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:23.251 [2024-11-15 11:45:48.685831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.251 [2024-11-15 11:45:48.685996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:23.251 [2024-11-15 11:45:48.686158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.251 [2024-11-15 11:45:48.686158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.194 
11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:24.194 [2024-11-15 11:45:49.556050] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:24.194 Malloc1 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:24.194 [2024-11-15 11:45:49.633504] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1126430 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:22:24.194 11:45:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:26.741 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:22:26.741 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:26.741 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:26.741 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:26.741 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:22:26.741 "tick_rate": 2400000000, 00:22:26.741 "poll_groups": [ 00:22:26.741 { 00:22:26.741 "name": "nvmf_tgt_poll_group_000", 00:22:26.741 "admin_qpairs": 1, 00:22:26.741 "io_qpairs": 1, 00:22:26.741 "current_admin_qpairs": 1, 00:22:26.741 "current_io_qpairs": 1, 00:22:26.741 "pending_bdev_io": 0, 00:22:26.741 "completed_nvme_io": 18016, 00:22:26.741 "transports": [ 00:22:26.741 { 00:22:26.741 "trtype": "TCP" 00:22:26.741 } 00:22:26.741 ] 00:22:26.741 }, 00:22:26.741 { 00:22:26.741 "name": "nvmf_tgt_poll_group_001", 00:22:26.741 "admin_qpairs": 0, 00:22:26.741 "io_qpairs": 1, 00:22:26.741 "current_admin_qpairs": 0, 00:22:26.741 "current_io_qpairs": 1, 00:22:26.741 "pending_bdev_io": 0, 00:22:26.741 "completed_nvme_io": 16935, 00:22:26.741 "transports": [ 00:22:26.741 { 00:22:26.741 "trtype": "TCP" 00:22:26.741 } 00:22:26.741 ] 00:22:26.741 }, 00:22:26.741 { 00:22:26.741 "name": "nvmf_tgt_poll_group_002", 00:22:26.741 "admin_qpairs": 0, 00:22:26.741 "io_qpairs": 1, 00:22:26.741 "current_admin_qpairs": 0, 00:22:26.741 "current_io_qpairs": 1, 00:22:26.741 "pending_bdev_io": 0, 00:22:26.741 "completed_nvme_io": 19481, 00:22:26.741 "transports": [ 00:22:26.741 { 00:22:26.741 "trtype": "TCP" 00:22:26.741 } 00:22:26.741 ] 00:22:26.741 }, 00:22:26.741 { 00:22:26.741 "name": "nvmf_tgt_poll_group_003", 00:22:26.741 "admin_qpairs": 0, 00:22:26.741 "io_qpairs": 1, 00:22:26.741 "current_admin_qpairs": 0, 00:22:26.741 "current_io_qpairs": 1, 00:22:26.741 "pending_bdev_io": 0, 00:22:26.741 "completed_nvme_io": 16899, 00:22:26.741 "transports": [ 00:22:26.741 { 00:22:26.741 "trtype": "TCP" 00:22:26.741 } 00:22:26.741 ] 00:22:26.741 } 00:22:26.741 ] 00:22:26.741 }' 00:22:26.741 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:26.741 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:22:26.741 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:22:26.741 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:22:26.741 11:45:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1126430 00:22:34.875 Initializing NVMe Controllers 00:22:34.875 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:34.875 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:34.875 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:34.875 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:34.875 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7 00:22:34.875 Initialization complete. Launching workers. 00:22:34.875 ======================================================== 00:22:34.875 Latency(us) 00:22:34.875 Device Information : IOPS MiB/s Average min max 00:22:34.875 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12704.10 49.63 5038.97 1445.30 10267.01 00:22:34.875 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13230.40 51.68 4838.06 1012.62 12649.34 00:22:34.875 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13809.10 53.94 4634.09 1205.19 12837.36 00:22:34.875 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13731.60 53.64 4661.20 1359.81 13207.38 00:22:34.875 ======================================================== 00:22:34.875 Total : 53475.20 208.89 4787.70 1012.62 13207.38 00:22:34.875 00:22:34.875 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:22:34.875 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:34.875 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:34.875 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:34.875 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:34.875 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:34.875 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:34.875 rmmod nvme_tcp 00:22:34.875 rmmod nvme_fabrics 00:22:34.875 rmmod nvme_keyring 00:22:34.875 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:34.875 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:34.875 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:34.875 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1126082 ']' 00:22:34.875 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1126082 00:22:34.875 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 1126082 ']' 00:22:34.875 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 1126082 00:22:34.875 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:22:34.875 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:34.875 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1126082 00:22:34.875 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:34.875 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:34.875 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1126082' 00:22:34.875 killing process with pid 1126082 00:22:34.875 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 1126082 00:22:34.875 11:45:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 1126082 00:22:34.875 11:46:00 
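
Aside: the adq_configure_nvmf_target 0 sequence that drove this first (ADQ-off) run maps one-to-one onto SPDK JSON-RPC calls; rpc_cmd in the trace is effectively a wrapper around scripts/rpc.py. A sketch of the same configuration against a target started with --wait-for-rpc (SPDK_DIR is a placeholder for the checkout path used in this job):

# Sketch: adq_configure_nvmf_target 0 as explicit rpc.py calls.
RPC="$SPDK_DIR/scripts/rpc.py"                                     # SPDK_DIR: placeholder
impl=$($RPC sock_get_default_impl | jq -r .impl_name)              # "posix" in this run
$RPC sock_impl_set_options --enable-placement-id 0 \
     --enable-zerocopy-send-server -i "$impl"                      # placement-id 0 = ADQ off
$RPC framework_start_init                                          # leave --wait-for-rpc state
$RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
$RPC bdev_malloc_create 64 512 -b Malloc1                          # 64 MiB bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

With --enable-placement-id 0 the connections round-robin across reactors, which is what the @86/@87 check above verified: all four poll groups must each be serving exactly one io_qpair (count=4).
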
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:34.875 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:34.875 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:34.876 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:34.876 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:34.876 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:34.876 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:34.876 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:34.876 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:34.876 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.876 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.876 11:46:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.786 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:36.786 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:22:36.786 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:36.786 11:46:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:38.700 11:46:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:40.614 11:46:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:45.906 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:45.906 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:45.907 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:45.907 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:45.907 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:45.907 11:46:10 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:45.907 11:46:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:45.907 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:45.907 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:45.907 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:45.907 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:45.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:45.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.710 ms 00:22:45.907 00:22:45.907 --- 10.0.0.2 ping statistics --- 00:22:45.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.907 rtt min/avg/max/mdev = 0.710/0.710/0.710/0.000 ms 00:22:45.907 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:45.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:45.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:22:45.907 00:22:45.907 --- 10.0.0.1 ping statistics --- 00:22:45.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:45.907 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:22:45.907 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:45.907 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:22:45.907 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:45.907 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:45.907 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:45.907 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:45.907 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:45.907 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:45.907 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:45.907 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:22:45.907 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:45.907 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:45.907 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:45.907 net.core.busy_poll = 1 00:22:45.907 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:45.907 net.core.busy_read = 1 00:22:45.907 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:45.907 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:45.907 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:45.907 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:45.907 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:45.907 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:45.907 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:45.908 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:45.908 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:45.908 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1130918 00:22:46.169 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1130918 00:22:46.169 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:46.169 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 1130918 ']' 00:22:46.169 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:46.169 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:46.169 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:46.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:46.169 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:46.169 11:46:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:46.169 [2024-11-15 11:46:11.461138] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:22:46.169 [2024-11-15 11:46:11.461207] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:46.169 [2024-11-15 11:46:11.565033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:46.169 [2024-11-15 11:46:11.618218] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
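
Aside: adq_reload_driver plus the adq_configure_driver steps just traced are the ADQ-specific host setup: reload the ice driver to a clean state, enable hardware TC offload, disable the driver's channel packet-inspect optimization, turn on socket busy polling, split the NIC queues into two traffic classes, and steer NVMe/TCP (10.0.0.2:4420) into the second class in hardware. Collected into one sketch with the exact values from the trace:

# Sketch: ADQ host-side setup as traced above (values verbatim from the run).
modprobe -a sch_mqprio                           # mqprio qdisc support
rmmod ice && modprobe ice && sleep 5             # reload NIC driver to a clean state
NS="ip netns exec cvl_0_0_ns_spdk"               # target interface lives in this netns
$NS ethtool --offload cvl_0_0 hw-tc-offload on
$NS ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1                   # busy-poll on blocking sockets
sysctl -w net.core.busy_read=1
# Two traffic classes: TC0 = 2 queues at offset 0, TC1 = 2 queues at offset 2.
$NS tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
$NS tc qdisc add dev cvl_0_0 ingress
# Hardware-only (skip_sw) flower filter: NVMe/TCP flows land in TC1.
$NS tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The trace then runs scripts/perf/nvmf/set_xps_rxqs on cvl_0_0, which configures XPS so that transmit-queue selection follows the receive-queue mapping.
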
00:22:46.169 [2024-11-15 11:46:11.618270] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:46.169 [2024-11-15 11:46:11.618279] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:46.169 [2024-11-15 11:46:11.618287] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:46.169 [2024-11-15 11:46:11.618293] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:46.169 [2024-11-15 11:46:11.620793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:46.169 [2024-11-15 11:46:11.620955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:46.169 [2024-11-15 11:46:11.621128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:46.169 [2024-11-15 11:46:11.621130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.113 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:47.113 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:22:47.113 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:47.113 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:47.113 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:47.113 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:47.113 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:22:47.113 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:47.113 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:47.113 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.113 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:47.113 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.113 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:47.113 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:47.113 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.113 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:47.114 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.114 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:47.114 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.114 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:47.114 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.114 11:46:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:47.114 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.114 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:47.114 [2024-11-15 11:46:12.487065] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:47.114 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.114 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:47.114 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.114 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:47.114 Malloc1 00:22:47.114 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.114 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:47.114 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.114 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:47.114 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.114 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:47.114 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.114 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:47.114 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.114 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:47.114 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.114 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:47.114 [2024-11-15 11:46:12.564416] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:47.114 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.114 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1131237 00:22:47.114 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:22:47.114 11:46:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:49.662 11:46:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:22:49.662 11:46:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.662 11:46:14 
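
The target half of adq_configure_nvmf_target is an ordinary RPC sequence; rpc_cmd wraps scripts/rpc.py, and the target was launched with --wait-for-rpc precisely so the socket options can land before framework init. Replayed by hand it would look roughly like this (arguments copied from the trace; -t tcp -o expands from the harness's NVMF_TRANSPORT_OPTS):

    rpc.py sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

--sock-priority 1 lines up with the mqprio map above (skb priority 1 maps to TC1), so the target's accepted connections ride the dedicated ADQ traffic class.
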
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:49.662 11:46:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.662 11:46:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:22:49.662 "tick_rate": 2400000000, 00:22:49.662 "poll_groups": [ 00:22:49.662 { 00:22:49.662 "name": "nvmf_tgt_poll_group_000", 00:22:49.662 "admin_qpairs": 1, 00:22:49.662 "io_qpairs": 4, 00:22:49.662 "current_admin_qpairs": 1, 00:22:49.662 "current_io_qpairs": 4, 00:22:49.662 "pending_bdev_io": 0, 00:22:49.662 "completed_nvme_io": 35866, 00:22:49.662 "transports": [ 00:22:49.662 { 00:22:49.662 "trtype": "TCP" 00:22:49.662 } 00:22:49.662 ] 00:22:49.662 }, 00:22:49.662 { 00:22:49.662 "name": "nvmf_tgt_poll_group_001", 00:22:49.662 "admin_qpairs": 0, 00:22:49.662 "io_qpairs": 0, 00:22:49.662 "current_admin_qpairs": 0, 00:22:49.662 "current_io_qpairs": 0, 00:22:49.662 "pending_bdev_io": 0, 00:22:49.662 "completed_nvme_io": 0, 00:22:49.662 "transports": [ 00:22:49.662 { 00:22:49.662 "trtype": "TCP" 00:22:49.662 } 00:22:49.662 ] 00:22:49.662 }, 00:22:49.662 { 00:22:49.662 "name": "nvmf_tgt_poll_group_002", 00:22:49.662 "admin_qpairs": 0, 00:22:49.662 "io_qpairs": 0, 00:22:49.662 "current_admin_qpairs": 0, 00:22:49.662 "current_io_qpairs": 0, 00:22:49.662 "pending_bdev_io": 0, 00:22:49.662 "completed_nvme_io": 0, 00:22:49.662 "transports": [ 00:22:49.662 { 00:22:49.662 "trtype": "TCP" 00:22:49.662 } 00:22:49.662 ] 00:22:49.662 }, 00:22:49.662 { 00:22:49.662 "name": "nvmf_tgt_poll_group_003", 00:22:49.662 "admin_qpairs": 0, 00:22:49.662 "io_qpairs": 0, 00:22:49.662 "current_admin_qpairs": 0, 00:22:49.662 "current_io_qpairs": 0, 00:22:49.662 "pending_bdev_io": 0, 00:22:49.662 "completed_nvme_io": 0, 00:22:49.662 "transports": [ 00:22:49.662 { 00:22:49.662 "trtype": "TCP" 00:22:49.662 } 00:22:49.662 ] 00:22:49.662 } 00:22:49.662 ] 00:22:49.662 }' 00:22:49.662 11:46:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:49.662 11:46:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:22:49.662 11:46:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:22:49.662 11:46:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:22:49.662 11:46:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1131237 00:22:57.801 Initializing NVMe Controllers 00:22:57.801 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:57.801 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:57.801 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:57.801 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:57.801 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:57.801 Initialization complete. Launching workers. 
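
The nvmf_get_stats probe above is the actual ADQ pass/fail criterion: the target runs four poll groups (cores 0-3) and the initiator opened four I/O qpairs, yet all four qpairs landed on nvmf_tgt_poll_group_000, because with placement-id enabled sockets are grouped by the hardware queue they arrive on. The test counts the poll groups that saw no I/O connections and treats fewer than two idle groups as a steering failure:

    rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
        | wc -l     # here: 3 of 4 groups idle, so the [[ count -lt 2 ]] guard passes

The initiator side was pinned opposite the target (-c 0xF0, cores 4-7, against the target's -m 0xF), which is why the perf summary that follows reports lcores 4 through 7.
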
00:22:57.801 ======================================================== 00:22:57.801 Latency(us) 00:22:57.801 Device Information : IOPS MiB/s Average min max 00:22:57.801 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6284.60 24.55 10186.31 1245.07 57748.02 00:22:57.801 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6614.50 25.84 9676.77 1245.64 60589.00 00:22:57.801 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6500.30 25.39 9850.47 1186.51 61558.80 00:22:57.801 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5726.30 22.37 11223.07 1346.68 55872.02 00:22:57.801 ======================================================== 00:22:57.801 Total : 25125.70 98.15 10201.57 1186.51 61558.80 00:22:57.801 00:22:57.801 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:22:57.801 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:57.801 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:57.801 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:57.801 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:57.801 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:57.801 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:57.801 rmmod nvme_tcp 00:22:57.801 rmmod nvme_fabrics 00:22:57.801 rmmod nvme_keyring 00:22:57.801 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:57.801 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:57.801 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:57.802 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1130918 ']' 00:22:57.802 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1130918 00:22:57.802 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 1130918 ']' 00:22:57.802 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 1130918 00:22:57.802 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:22:57.802 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:57.802 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1130918 00:22:57.802 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:57.802 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:57.802 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1130918' 00:22:57.802 killing process with pid 1130918 00:22:57.802 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 1130918 00:22:57.802 11:46:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 1130918 00:22:57.802 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:57.802 
11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:57.802 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:57.802 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:57.802 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:57.802 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:57.802 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:57.802 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:57.802 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:57.802 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.802 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:57.802 11:46:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:01.107 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:01.107 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:23:01.107 00:23:01.107 real 0m54.009s 00:23:01.107 user 2m50.054s 00:23:01.107 sys 0m11.572s 00:23:01.107 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:01.107 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:01.107 ************************************ 00:23:01.107 END TEST nvmf_perf_adq 00:23:01.107 ************************************ 00:23:01.107 11:46:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:01.107 11:46:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:01.107 11:46:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:01.107 11:46:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:01.107 ************************************ 00:23:01.107 START TEST nvmf_shutdown 00:23:01.107 ************************************ 00:23:01.107 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:01.107 * Looking for test storage... 
00:23:01.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:01.107 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:01.107 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:23:01.107 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:01.107 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:01.107 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:01.107 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:01.107 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:01.107 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:01.107 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:01.107 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:01.107 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:01.107 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:01.107 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:01.107 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:01.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.108 --rc genhtml_branch_coverage=1 00:23:01.108 --rc genhtml_function_coverage=1 00:23:01.108 --rc genhtml_legend=1 00:23:01.108 --rc geninfo_all_blocks=1 00:23:01.108 --rc geninfo_unexecuted_blocks=1 00:23:01.108 00:23:01.108 ' 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:01.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.108 --rc genhtml_branch_coverage=1 00:23:01.108 --rc genhtml_function_coverage=1 00:23:01.108 --rc genhtml_legend=1 00:23:01.108 --rc geninfo_all_blocks=1 00:23:01.108 --rc geninfo_unexecuted_blocks=1 00:23:01.108 00:23:01.108 ' 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:01.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.108 --rc genhtml_branch_coverage=1 00:23:01.108 --rc genhtml_function_coverage=1 00:23:01.108 --rc genhtml_legend=1 00:23:01.108 --rc geninfo_all_blocks=1 00:23:01.108 --rc geninfo_unexecuted_blocks=1 00:23:01.108 00:23:01.108 ' 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:01.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:01.108 --rc genhtml_branch_coverage=1 00:23:01.108 --rc genhtml_function_coverage=1 00:23:01.108 --rc genhtml_legend=1 00:23:01.108 --rc geninfo_all_blocks=1 00:23:01.108 --rc geninfo_unexecuted_blocks=1 00:23:01.108 00:23:01.108 ' 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
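
The lcov check above exercises the harness's version comparator: cmp_versions splits both version strings on '.' and '-' and compares field by field numerically, which is how 1.15 sorts before 2 even though "15" beats "2" lexically. The same idea as a compact helper (ver_lt is an illustrative name; numeric fields assumed):

    ver_lt() { # succeed when $1 sorts strictly before $2
        local IFS=.-
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0
            ((${a[i]:-0} > ${b[i]:-0})) && return 1
        done
        return 1 # equal is not "less than"
    }
    ver_lt 1.15 2 && echo "lcov predates 2.x"
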
00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:01.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:01.108 11:46:26 
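
One genuine wart surfaces in the trace above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' and bash prints "integer expression expected" because an empty string is not an integer. Nothing breaks, since the failed test just returns non-zero and the branch is skipped, but the conventional guard is to default the expansion before an arithmetic test (SOME_FLAG below is an illustrative name, not the variable the script actually checks):

    [ "$SOME_FLAG" -eq 1 ] && echo enabled        # noisy when SOME_FLAG is unset or empty
    [ "${SOME_FLAG:-0}" -eq 1 ] && echo enabled   # well-formed either way
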
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:23:01.108 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:01.109 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:01.109 ************************************ 00:23:01.109 START TEST nvmf_shutdown_tc1 00:23:01.109 ************************************ 00:23:01.109 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc1 00:23:01.109 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:23:01.109 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:01.109 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:01.109 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:01.109 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:01.109 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:01.109 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:01.109 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:01.109 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:01.109 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:01.109 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:01.109 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:01.109 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:01.109 11:46:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:09.254 11:46:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:09.254 11:46:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:09.254 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:09.254 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:09.254 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:09.254 11:46:33 
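
Device discovery above walks a table of supported PCI IDs (the two e810 ports, 0x8086:0x159b, match here) and resolves each function to its kernel interface purely through sysfs, with no vendor tooling. The lookup in isolation (PCI address and resulting name taken from this run):

    pci=0000:4b:00.0
    # the bound net device shows up as a directory name under the PCI node
    ls "/sys/bus/pci/devices/$pci/net/"
    # -> cvl_0_0
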
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:09.254 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:09.254 11:46:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:09.254 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:09.254 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:09.254 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:09.254 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:09.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:09.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:23:09.254 00:23:09.254 --- 10.0.0.2 ping statistics --- 00:23:09.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.254 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:23:09.254 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:09.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:09.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:23:09.254 00:23:09.254 --- 10.0.0.1 ping statistics --- 00:23:09.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.254 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:23:09.254 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:09.254 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:23:09.254 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:09.254 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:09.254 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:09.254 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:09.254 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:09.254 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:09.254 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:09.254 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:09.254 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:09.254 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:09.254 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:09.254 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1137731 00:23:09.254 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1137731 00:23:09.254 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:09.254 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 1137731 ']' 00:23:09.254 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.254 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:09.254 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
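
From here on every target invocation is transparently wrapped to run inside that namespace: NVMF_APP gets the namespace prefix prepended, which is why the nvmf_tgt command above carries ip netns exec (and why the earlier perf_adq run showed the prefix applied twice, harmlessly):

    NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
    NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
    # expands to: ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
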
00:23:09.255 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:09.255 11:46:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:09.255 [2024-11-15 11:46:34.198838] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:23:09.255 [2024-11-15 11:46:34.198905] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.255 [2024-11-15 11:46:34.299380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:09.255 [2024-11-15 11:46:34.351080] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:09.255 [2024-11-15 11:46:34.351128] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:09.255 [2024-11-15 11:46:34.351136] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.255 [2024-11-15 11:46:34.351143] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:09.255 [2024-11-15 11:46:34.351149] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:09.255 [2024-11-15 11:46:34.353215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:09.255 [2024-11-15 11:46:34.353377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:09.255 [2024-11-15 11:46:34.353516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.255 [2024-11-15 11:46:34.353516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:09.517 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:09.517 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:23:09.517 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:09.517 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:09.517 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:09.779 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.779 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:09.779 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.779 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:09.779 [2024-11-15 11:46:35.058458] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:09.779 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.779 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:09.779 11:46:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:09.779 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:09.779 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:09.779 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:09.779 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:09.779 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:09.779 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:09.779 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:09.779 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:09.779 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:09.779 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:09.779 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:09.779 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:09.779 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:09.779 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:09.779 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:09.779 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:09.779 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:09.779 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:09.779 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:09.779 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:09.779 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:09.779 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:09.779 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:09.779 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:09.779 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.779 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:09.779 Malloc1 
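
Rather than issuing ten round-trips per subsystem, starttarget batches the whole setup: the loop above appends each subsystem's RPC lines to rpcs.txt via cat heredocs (the heredoc bodies are suppressed by xtrace, which is why only bare "cat" lines appear), then replays the file through a single rpc_cmd call; scripts/rpc.py executes commands read from stdin one per line, and the Malloc1 reply above plus Malloc2..Malloc10 just after are the acknowledgements. A trimmed sketch of the generator, using echo instead of the script's heredocs; the per-subsystem lines are an approximation consistent with this run (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512, listener 10.0.0.2:4420):

    for i in {1..10}; do
        {
            echo "bdev_malloc_create 64 512 -b Malloc$i"
            echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
            echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
            echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
        } >> rpcs.txt
    done
    rpc.py < rpcs.txt
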
00:23:09.779 [2024-11-15 11:46:35.183408] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:09.779 Malloc2 00:23:09.779 Malloc3 00:23:10.041 Malloc4 00:23:10.041 Malloc5 00:23:10.041 Malloc6 00:23:10.041 Malloc7 00:23:10.041 Malloc8 00:23:10.041 Malloc9 00:23:10.303 Malloc10 00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1138031 00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1138031 /var/tmp/bdevperf.sock 00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 1138031 ']' 00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:10.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
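
The bdevperf side takes its controllers from generated JSON rather than live RPCs: gen_nvmf_target_json (the heredoc loop that follows) renders one bdev_nvme_attach_controller entry per subsystem, and the bdev_svc app consumes the result through --json /dev/fd/63. With this run's values substituted, a single rendered entry looks like:

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }

The entries are then wrapped in SPDK's usual top-level envelope, {"subsystems": [{"subsystem": "bdev", "config": [...]}]}; the envelope itself is not visible in this trace, so its exact shape here is an assumption from the config format.
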
00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:10.304 { 00:23:10.304 "params": { 00:23:10.304 "name": "Nvme$subsystem", 00:23:10.304 "trtype": "$TEST_TRANSPORT", 00:23:10.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.304 "adrfam": "ipv4", 00:23:10.304 "trsvcid": "$NVMF_PORT", 00:23:10.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.304 "hdgst": ${hdgst:-false}, 00:23:10.304 "ddgst": ${ddgst:-false} 00:23:10.304 }, 00:23:10.304 "method": "bdev_nvme_attach_controller" 00:23:10.304 } 00:23:10.304 EOF 00:23:10.304 )") 00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:10.304 { 00:23:10.304 "params": { 00:23:10.304 "name": "Nvme$subsystem", 00:23:10.304 "trtype": "$TEST_TRANSPORT", 00:23:10.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.304 "adrfam": "ipv4", 00:23:10.304 "trsvcid": "$NVMF_PORT", 00:23:10.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.304 "hdgst": ${hdgst:-false}, 00:23:10.304 "ddgst": ${ddgst:-false} 00:23:10.304 }, 00:23:10.304 "method": "bdev_nvme_attach_controller" 00:23:10.304 } 00:23:10.304 EOF 00:23:10.304 )") 00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:10.304 { 00:23:10.304 "params": { 00:23:10.304 "name": "Nvme$subsystem", 00:23:10.304 "trtype": "$TEST_TRANSPORT", 00:23:10.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.304 "adrfam": "ipv4", 00:23:10.304 "trsvcid": "$NVMF_PORT", 00:23:10.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.304 "hdgst": ${hdgst:-false}, 00:23:10.304 "ddgst": ${ddgst:-false} 00:23:10.304 }, 00:23:10.304 "method": "bdev_nvme_attach_controller" 
00:23:10.304 } 00:23:10.304 EOF 00:23:10.304 )") 00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:10.304 { 00:23:10.304 "params": { 00:23:10.304 "name": "Nvme$subsystem", 00:23:10.304 "trtype": "$TEST_TRANSPORT", 00:23:10.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.304 "adrfam": "ipv4", 00:23:10.304 "trsvcid": "$NVMF_PORT", 00:23:10.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.304 "hdgst": ${hdgst:-false}, 00:23:10.304 "ddgst": ${ddgst:-false} 00:23:10.304 }, 00:23:10.304 "method": "bdev_nvme_attach_controller" 00:23:10.304 } 00:23:10.304 EOF 00:23:10.304 )") 00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:10.304 { 00:23:10.304 "params": { 00:23:10.304 "name": "Nvme$subsystem", 00:23:10.304 "trtype": "$TEST_TRANSPORT", 00:23:10.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.304 "adrfam": "ipv4", 00:23:10.304 "trsvcid": "$NVMF_PORT", 00:23:10.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.304 "hdgst": ${hdgst:-false}, 00:23:10.304 "ddgst": ${ddgst:-false} 00:23:10.304 }, 00:23:10.304 "method": "bdev_nvme_attach_controller" 00:23:10.304 } 00:23:10.304 EOF 00:23:10.304 )") 00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:10.304 { 00:23:10.304 "params": { 00:23:10.304 "name": "Nvme$subsystem", 00:23:10.304 "trtype": "$TEST_TRANSPORT", 00:23:10.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.304 "adrfam": "ipv4", 00:23:10.304 "trsvcid": "$NVMF_PORT", 00:23:10.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.304 "hdgst": ${hdgst:-false}, 00:23:10.304 "ddgst": ${ddgst:-false} 00:23:10.304 }, 00:23:10.304 "method": "bdev_nvme_attach_controller" 00:23:10.304 } 00:23:10.304 EOF 00:23:10.304 )") 00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:10.304 [2024-11-15 11:46:35.699919] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:23:10.304 [2024-11-15 11:46:35.699993] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:10.304 { 00:23:10.304 "params": { 00:23:10.304 "name": "Nvme$subsystem", 00:23:10.304 "trtype": "$TEST_TRANSPORT", 00:23:10.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.304 "adrfam": "ipv4", 00:23:10.304 "trsvcid": "$NVMF_PORT", 00:23:10.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.304 "hdgst": ${hdgst:-false}, 00:23:10.304 "ddgst": ${ddgst:-false} 00:23:10.304 }, 00:23:10.304 "method": "bdev_nvme_attach_controller" 00:23:10.304 } 00:23:10.304 EOF 00:23:10.304 )") 00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:10.304 { 00:23:10.304 "params": { 00:23:10.304 "name": "Nvme$subsystem", 00:23:10.304 "trtype": "$TEST_TRANSPORT", 00:23:10.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.304 "adrfam": "ipv4", 00:23:10.304 "trsvcid": "$NVMF_PORT", 00:23:10.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.304 "hdgst": ${hdgst:-false}, 00:23:10.304 "ddgst": ${ddgst:-false} 00:23:10.304 }, 00:23:10.304 "method": "bdev_nvme_attach_controller" 00:23:10.304 } 00:23:10.304 EOF 00:23:10.304 )") 00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:10.304 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:10.304 { 00:23:10.304 "params": { 00:23:10.304 "name": "Nvme$subsystem", 00:23:10.305 "trtype": "$TEST_TRANSPORT", 00:23:10.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.305 "adrfam": "ipv4", 00:23:10.305 "trsvcid": "$NVMF_PORT", 00:23:10.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.305 "hdgst": ${hdgst:-false}, 00:23:10.305 "ddgst": ${ddgst:-false} 00:23:10.305 }, 00:23:10.305 "method": "bdev_nvme_attach_controller" 00:23:10.305 } 00:23:10.305 EOF 00:23:10.305 )") 00:23:10.305 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:10.305 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:10.305 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:10.305 { 00:23:10.305 "params": { 00:23:10.305 "name": "Nvme$subsystem", 00:23:10.305 "trtype": "$TEST_TRANSPORT", 00:23:10.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:10.305 "adrfam": "ipv4", 
00:23:10.305 "trsvcid": "$NVMF_PORT", 00:23:10.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:10.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:10.305 "hdgst": ${hdgst:-false}, 00:23:10.305 "ddgst": ${ddgst:-false} 00:23:10.305 }, 00:23:10.305 "method": "bdev_nvme_attach_controller" 00:23:10.305 } 00:23:10.305 EOF 00:23:10.305 )") 00:23:10.305 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:10.305 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:23:10.305 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:23:10.305 11:46:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:10.305 "params": { 00:23:10.305 "name": "Nvme1", 00:23:10.305 "trtype": "tcp", 00:23:10.305 "traddr": "10.0.0.2", 00:23:10.305 "adrfam": "ipv4", 00:23:10.305 "trsvcid": "4420", 00:23:10.305 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:10.305 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:10.305 "hdgst": false, 00:23:10.305 "ddgst": false 00:23:10.305 }, 00:23:10.305 "method": "bdev_nvme_attach_controller" 00:23:10.305 },{ 00:23:10.305 "params": { 00:23:10.305 "name": "Nvme2", 00:23:10.305 "trtype": "tcp", 00:23:10.305 "traddr": "10.0.0.2", 00:23:10.305 "adrfam": "ipv4", 00:23:10.305 "trsvcid": "4420", 00:23:10.305 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:10.305 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:10.305 "hdgst": false, 00:23:10.305 "ddgst": false 00:23:10.305 }, 00:23:10.305 "method": "bdev_nvme_attach_controller" 00:23:10.305 },{ 00:23:10.305 "params": { 00:23:10.305 "name": "Nvme3", 00:23:10.305 "trtype": "tcp", 00:23:10.305 "traddr": "10.0.0.2", 00:23:10.305 "adrfam": "ipv4", 00:23:10.305 "trsvcid": "4420", 00:23:10.305 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:10.305 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:10.305 "hdgst": false, 00:23:10.305 "ddgst": false 00:23:10.305 }, 00:23:10.305 "method": "bdev_nvme_attach_controller" 00:23:10.305 },{ 00:23:10.305 "params": { 00:23:10.305 "name": "Nvme4", 00:23:10.305 "trtype": "tcp", 00:23:10.305 "traddr": "10.0.0.2", 00:23:10.305 "adrfam": "ipv4", 00:23:10.305 "trsvcid": "4420", 00:23:10.305 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:10.305 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:10.305 "hdgst": false, 00:23:10.305 "ddgst": false 00:23:10.305 }, 00:23:10.305 "method": "bdev_nvme_attach_controller" 00:23:10.305 },{ 00:23:10.305 "params": { 00:23:10.305 "name": "Nvme5", 00:23:10.305 "trtype": "tcp", 00:23:10.305 "traddr": "10.0.0.2", 00:23:10.305 "adrfam": "ipv4", 00:23:10.305 "trsvcid": "4420", 00:23:10.305 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:10.305 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:10.305 "hdgst": false, 00:23:10.305 "ddgst": false 00:23:10.305 }, 00:23:10.305 "method": "bdev_nvme_attach_controller" 00:23:10.305 },{ 00:23:10.305 "params": { 00:23:10.305 "name": "Nvme6", 00:23:10.305 "trtype": "tcp", 00:23:10.305 "traddr": "10.0.0.2", 00:23:10.305 "adrfam": "ipv4", 00:23:10.305 "trsvcid": "4420", 00:23:10.305 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:10.305 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:10.305 "hdgst": false, 00:23:10.305 "ddgst": false 00:23:10.305 }, 00:23:10.305 "method": "bdev_nvme_attach_controller" 00:23:10.305 },{ 00:23:10.305 "params": { 00:23:10.305 "name": "Nvme7", 00:23:10.305 "trtype": "tcp", 00:23:10.305 "traddr": "10.0.0.2", 00:23:10.305 
"adrfam": "ipv4", 00:23:10.305 "trsvcid": "4420", 00:23:10.305 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:10.305 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:10.305 "hdgst": false, 00:23:10.305 "ddgst": false 00:23:10.305 }, 00:23:10.305 "method": "bdev_nvme_attach_controller" 00:23:10.305 },{ 00:23:10.305 "params": { 00:23:10.305 "name": "Nvme8", 00:23:10.305 "trtype": "tcp", 00:23:10.305 "traddr": "10.0.0.2", 00:23:10.305 "adrfam": "ipv4", 00:23:10.305 "trsvcid": "4420", 00:23:10.305 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:10.305 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:10.305 "hdgst": false, 00:23:10.305 "ddgst": false 00:23:10.305 }, 00:23:10.305 "method": "bdev_nvme_attach_controller" 00:23:10.305 },{ 00:23:10.305 "params": { 00:23:10.305 "name": "Nvme9", 00:23:10.305 "trtype": "tcp", 00:23:10.305 "traddr": "10.0.0.2", 00:23:10.305 "adrfam": "ipv4", 00:23:10.305 "trsvcid": "4420", 00:23:10.305 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:10.305 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:10.305 "hdgst": false, 00:23:10.305 "ddgst": false 00:23:10.305 }, 00:23:10.305 "method": "bdev_nvme_attach_controller" 00:23:10.305 },{ 00:23:10.305 "params": { 00:23:10.305 "name": "Nvme10", 00:23:10.305 "trtype": "tcp", 00:23:10.305 "traddr": "10.0.0.2", 00:23:10.305 "adrfam": "ipv4", 00:23:10.305 "trsvcid": "4420", 00:23:10.305 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:10.305 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:10.305 "hdgst": false, 00:23:10.305 "ddgst": false 00:23:10.305 }, 00:23:10.305 "method": "bdev_nvme_attach_controller" 00:23:10.305 }' 00:23:10.305 [2024-11-15 11:46:35.797165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.566 [2024-11-15 11:46:35.852057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:11.951 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:11.951 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:23:11.951 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:11.951 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.951 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:11.951 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.951 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1138031 00:23:11.951 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:23:11.951 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1138031 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:11.951 11:46:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:23:12.895 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1137731 00:23:12.895 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:12.895 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:12.895 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:23:12.895 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:23:12.895 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:12.895 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:12.895 { 00:23:12.895 "params": { 00:23:12.895 "name": "Nvme$subsystem", 00:23:12.895 "trtype": "$TEST_TRANSPORT", 00:23:12.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:12.895 "adrfam": "ipv4", 00:23:12.895 "trsvcid": "$NVMF_PORT", 00:23:12.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:12.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:12.895 "hdgst": ${hdgst:-false}, 00:23:12.895 "ddgst": ${ddgst:-false} 00:23:12.895 }, 00:23:12.895 "method": "bdev_nvme_attach_controller" 00:23:12.895 } 00:23:12.895 EOF 00:23:12.895 )") 00:23:12.895 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:12.895 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:12.895 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:12.895 { 00:23:12.895 "params": { 00:23:12.895 "name": "Nvme$subsystem", 00:23:12.895 "trtype": "$TEST_TRANSPORT", 00:23:12.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:12.895 "adrfam": "ipv4", 00:23:12.895 "trsvcid": "$NVMF_PORT", 00:23:12.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:12.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:12.895 "hdgst": ${hdgst:-false}, 00:23:12.895 "ddgst": ${ddgst:-false} 00:23:12.895 }, 00:23:12.895 "method": "bdev_nvme_attach_controller" 00:23:12.895 } 00:23:12.895 EOF 00:23:12.895 )") 00:23:12.895 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:12.895 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:12.895 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:12.895 { 00:23:12.895 "params": { 00:23:12.895 "name": "Nvme$subsystem", 00:23:12.895 "trtype": "$TEST_TRANSPORT", 00:23:12.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:12.895 "adrfam": "ipv4", 00:23:12.895 "trsvcid": "$NVMF_PORT", 00:23:12.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:12.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:12.895 "hdgst": ${hdgst:-false}, 00:23:12.895 "ddgst": ${ddgst:-false} 00:23:12.895 }, 00:23:12.895 "method": "bdev_nvme_attach_controller" 00:23:12.895 } 00:23:12.895 EOF 00:23:12.895 )") 00:23:12.895 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:12.895 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:12.895 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:12.895 { 00:23:12.895 "params": { 00:23:12.895 "name": "Nvme$subsystem", 00:23:12.895 "trtype": "$TEST_TRANSPORT", 00:23:12.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:12.895 "adrfam": "ipv4", 00:23:12.895 "trsvcid": "$NVMF_PORT", 00:23:12.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:12.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:12.895 "hdgst": ${hdgst:-false}, 00:23:12.895 "ddgst": ${ddgst:-false} 00:23:12.895 }, 00:23:12.895 "method": "bdev_nvme_attach_controller" 00:23:12.895 } 00:23:12.895 EOF 00:23:12.895 )") 00:23:12.895 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:12.895 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:12.895 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:12.895 { 00:23:12.895 "params": { 00:23:12.895 "name": "Nvme$subsystem", 00:23:12.895 "trtype": "$TEST_TRANSPORT", 00:23:12.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:12.895 "adrfam": "ipv4", 00:23:12.895 "trsvcid": "$NVMF_PORT", 00:23:12.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:12.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:12.895 "hdgst": ${hdgst:-false}, 00:23:12.895 "ddgst": ${ddgst:-false} 00:23:12.895 }, 00:23:12.895 "method": "bdev_nvme_attach_controller" 00:23:12.895 } 00:23:12.895 EOF 00:23:12.895 )") 00:23:12.895 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:12.895 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:12.895 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:12.895 { 00:23:12.895 "params": { 00:23:12.895 "name": "Nvme$subsystem", 00:23:12.895 "trtype": "$TEST_TRANSPORT", 00:23:12.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:12.895 "adrfam": "ipv4", 00:23:12.895 "trsvcid": "$NVMF_PORT", 00:23:12.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:12.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:12.895 "hdgst": ${hdgst:-false}, 00:23:12.895 "ddgst": ${ddgst:-false} 00:23:12.895 }, 00:23:12.895 "method": "bdev_nvme_attach_controller" 00:23:12.895 } 00:23:12.895 EOF 00:23:12.895 )") 00:23:12.895 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:12.895 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:12.895 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:12.895 { 00:23:12.895 "params": { 00:23:12.895 "name": "Nvme$subsystem", 00:23:12.895 "trtype": "$TEST_TRANSPORT", 00:23:12.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:12.895 "adrfam": "ipv4", 00:23:12.895 "trsvcid": "$NVMF_PORT", 00:23:12.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:12.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:12.895 "hdgst": ${hdgst:-false}, 00:23:12.895 "ddgst": ${ddgst:-false} 00:23:12.895 }, 00:23:12.895 "method": "bdev_nvme_attach_controller" 00:23:12.895 } 00:23:12.895 EOF 00:23:12.895 )") 00:23:12.895 [2024-11-15 11:46:38.214634] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:23:12.895 [2024-11-15 11:46:38.214690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1138488 ] 00:23:12.895 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:12.895 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:12.895 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:12.895 { 00:23:12.895 "params": { 00:23:12.895 "name": "Nvme$subsystem", 00:23:12.895 "trtype": "$TEST_TRANSPORT", 00:23:12.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:12.895 "adrfam": "ipv4", 00:23:12.895 "trsvcid": "$NVMF_PORT", 00:23:12.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:12.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:12.895 "hdgst": ${hdgst:-false}, 00:23:12.895 "ddgst": ${ddgst:-false} 00:23:12.895 }, 00:23:12.895 "method": "bdev_nvme_attach_controller" 00:23:12.895 } 00:23:12.895 EOF 00:23:12.895 )") 00:23:12.895 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:12.895 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:12.895 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:12.895 { 00:23:12.895 "params": { 00:23:12.895 "name": "Nvme$subsystem", 00:23:12.895 "trtype": "$TEST_TRANSPORT", 00:23:12.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:12.895 "adrfam": "ipv4", 00:23:12.895 "trsvcid": "$NVMF_PORT", 00:23:12.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:12.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:12.895 "hdgst": ${hdgst:-false}, 00:23:12.895 "ddgst": ${ddgst:-false} 00:23:12.895 }, 00:23:12.895 "method": "bdev_nvme_attach_controller" 00:23:12.895 } 00:23:12.895 EOF 00:23:12.895 )") 00:23:12.895 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:12.895 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:12.895 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:12.895 { 00:23:12.895 "params": { 00:23:12.895 "name": "Nvme$subsystem", 00:23:12.895 "trtype": "$TEST_TRANSPORT", 00:23:12.895 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:12.895 "adrfam": "ipv4", 00:23:12.895 "trsvcid": "$NVMF_PORT", 00:23:12.895 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:12.895 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:12.895 "hdgst": ${hdgst:-false}, 00:23:12.895 "ddgst": ${ddgst:-false} 00:23:12.895 }, 00:23:12.896 "method": "bdev_nvme_attach_controller" 00:23:12.896 } 00:23:12.896 EOF 00:23:12.896 )") 00:23:12.896 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:23:12.896 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
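With every stanza queued, the nvmf/common.sh@584-586 steps just below join them on commas and print the merged document, and that document reaches bdevperf through process substitution rather than a temp file, which is why the command line at shutdown.sh@92 above reads --json /dev/fd/62. A hedged sketch of that hand-off, reusing the illustrative gen_target_json_sketch helper from the earlier note (binary path abbreviated; the real one appears in the trace above):

./bdevperf --json <(gen_target_json_sketch 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 1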
00:23:12.896 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:23:12.896 11:46:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:12.896 "params": { 00:23:12.896 "name": "Nvme1", 00:23:12.896 "trtype": "tcp", 00:23:12.896 "traddr": "10.0.0.2", 00:23:12.896 "adrfam": "ipv4", 00:23:12.896 "trsvcid": "4420", 00:23:12.896 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:12.896 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:12.896 "hdgst": false, 00:23:12.896 "ddgst": false 00:23:12.896 }, 00:23:12.896 "method": "bdev_nvme_attach_controller" 00:23:12.896 },{ 00:23:12.896 "params": { 00:23:12.896 "name": "Nvme2", 00:23:12.896 "trtype": "tcp", 00:23:12.896 "traddr": "10.0.0.2", 00:23:12.896 "adrfam": "ipv4", 00:23:12.896 "trsvcid": "4420", 00:23:12.896 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:12.896 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:12.896 "hdgst": false, 00:23:12.896 "ddgst": false 00:23:12.896 }, 00:23:12.896 "method": "bdev_nvme_attach_controller" 00:23:12.896 },{ 00:23:12.896 "params": { 00:23:12.896 "name": "Nvme3", 00:23:12.896 "trtype": "tcp", 00:23:12.896 "traddr": "10.0.0.2", 00:23:12.896 "adrfam": "ipv4", 00:23:12.896 "trsvcid": "4420", 00:23:12.896 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:12.896 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:12.896 "hdgst": false, 00:23:12.896 "ddgst": false 00:23:12.896 }, 00:23:12.896 "method": "bdev_nvme_attach_controller" 00:23:12.896 },{ 00:23:12.896 "params": { 00:23:12.896 "name": "Nvme4", 00:23:12.896 "trtype": "tcp", 00:23:12.896 "traddr": "10.0.0.2", 00:23:12.896 "adrfam": "ipv4", 00:23:12.896 "trsvcid": "4420", 00:23:12.896 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:12.896 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:12.896 "hdgst": false, 00:23:12.896 "ddgst": false 00:23:12.896 }, 00:23:12.896 "method": "bdev_nvme_attach_controller" 00:23:12.896 },{ 00:23:12.896 "params": { 00:23:12.896 "name": "Nvme5", 00:23:12.896 "trtype": "tcp", 00:23:12.896 "traddr": "10.0.0.2", 00:23:12.896 "adrfam": "ipv4", 00:23:12.896 "trsvcid": "4420", 00:23:12.896 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:12.896 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:12.896 "hdgst": false, 00:23:12.896 "ddgst": false 00:23:12.896 }, 00:23:12.896 "method": "bdev_nvme_attach_controller" 00:23:12.896 },{ 00:23:12.896 "params": { 00:23:12.896 "name": "Nvme6", 00:23:12.896 "trtype": "tcp", 00:23:12.896 "traddr": "10.0.0.2", 00:23:12.896 "adrfam": "ipv4", 00:23:12.896 "trsvcid": "4420", 00:23:12.896 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:12.896 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:12.896 "hdgst": false, 00:23:12.896 "ddgst": false 00:23:12.896 }, 00:23:12.896 "method": "bdev_nvme_attach_controller" 00:23:12.896 },{ 00:23:12.896 "params": { 00:23:12.896 "name": "Nvme7", 00:23:12.896 "trtype": "tcp", 00:23:12.896 "traddr": "10.0.0.2", 00:23:12.896 "adrfam": "ipv4", 00:23:12.896 "trsvcid": "4420", 00:23:12.896 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:12.896 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:12.896 "hdgst": false, 00:23:12.896 "ddgst": false 00:23:12.896 }, 00:23:12.896 "method": "bdev_nvme_attach_controller" 00:23:12.896 },{ 00:23:12.896 "params": { 00:23:12.896 "name": "Nvme8", 00:23:12.896 "trtype": "tcp", 00:23:12.896 "traddr": "10.0.0.2", 00:23:12.896 "adrfam": "ipv4", 00:23:12.896 "trsvcid": "4420", 00:23:12.896 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:12.896 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:23:12.896 "hdgst": false, 00:23:12.896 "ddgst": false 00:23:12.896 }, 00:23:12.896 "method": "bdev_nvme_attach_controller" 00:23:12.896 },{ 00:23:12.896 "params": { 00:23:12.896 "name": "Nvme9", 00:23:12.896 "trtype": "tcp", 00:23:12.896 "traddr": "10.0.0.2", 00:23:12.896 "adrfam": "ipv4", 00:23:12.896 "trsvcid": "4420", 00:23:12.896 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:12.896 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:12.896 "hdgst": false, 00:23:12.896 "ddgst": false 00:23:12.896 }, 00:23:12.896 "method": "bdev_nvme_attach_controller" 00:23:12.896 },{ 00:23:12.896 "params": { 00:23:12.896 "name": "Nvme10", 00:23:12.896 "trtype": "tcp", 00:23:12.896 "traddr": "10.0.0.2", 00:23:12.896 "adrfam": "ipv4", 00:23:12.896 "trsvcid": "4420", 00:23:12.896 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:12.896 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:12.896 "hdgst": false, 00:23:12.896 "ddgst": false 00:23:12.896 }, 00:23:12.896 "method": "bdev_nvme_attach_controller" 00:23:12.896 }' 00:23:12.896 [2024-11-15 11:46:38.305648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:12.896 [2024-11-15 11:46:38.341326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.283 Running I/O for 1 seconds... 00:23:15.489 1860.00 IOPS, 116.25 MiB/s 00:23:15.489 Latency(us) 00:23:15.489 [2024-11-15T10:46:40.987Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:15.489 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:15.489 Verification LBA range: start 0x0 length 0x400 00:23:15.489 Nvme1n1 : 1.14 224.63 14.04 0.00 0.00 282088.32 13981.01 256901.12 00:23:15.489 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:15.489 Verification LBA range: start 0x0 length 0x400 00:23:15.489 Nvme2n1 : 1.15 223.26 13.95 0.00 0.00 278612.48 19988.48 251658.24 00:23:15.489 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:15.489 Verification LBA range: start 0x0 length 0x400 00:23:15.489 Nvme3n1 : 1.12 227.91 14.24 0.00 0.00 264266.45 22063.79 267386.88 00:23:15.489 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:15.489 Verification LBA range: start 0x0 length 0x400 00:23:15.489 Nvme4n1 : 1.10 231.73 14.48 0.00 0.00 258242.56 22173.01 239424.85 00:23:15.489 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:15.489 Verification LBA range: start 0x0 length 0x400 00:23:15.489 Nvme5n1 : 1.15 223.06 13.94 0.00 0.00 264408.32 14636.37 255153.49 00:23:15.489 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:15.489 Verification LBA range: start 0x0 length 0x400 00:23:15.489 Nvme6n1 : 1.13 246.90 15.43 0.00 0.00 229065.29 14745.60 235929.60 00:23:15.489 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:15.489 Verification LBA range: start 0x0 length 0x400 00:23:15.489 Nvme7n1 : 1.19 269.88 16.87 0.00 0.00 210682.54 12779.52 246415.36 00:23:15.489 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:15.489 Verification LBA range: start 0x0 length 0x400 00:23:15.489 Nvme8n1 : 1.17 273.11 17.07 0.00 0.00 204902.74 9830.40 263891.63 00:23:15.489 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:15.489 Verification LBA range: start 0x0 length 0x400 00:23:15.489 Nvme9n1 : 1.18 217.75 13.61 0.00 0.00 252362.24 16820.91 276125.01 00:23:15.489 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:23:15.489 Verification LBA range: start 0x0 length 0x400 00:23:15.489 Nvme10n1 : 1.20 267.27 16.70 0.00 0.00 202231.81 9557.33 253405.87 00:23:15.489 [2024-11-15T10:46:40.987Z] =================================================================================================================== 00:23:15.489 [2024-11-15T10:46:40.987Z] Total : 2405.50 150.34 0.00 0.00 241871.30 9557.33 276125.01 00:23:15.489 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:23:15.489 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:15.489 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:15.489 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:15.489 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:15.489 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:15.489 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:23:15.489 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:15.489 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:23:15.490 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:15.490 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:15.490 rmmod nvme_tcp 00:23:15.490 rmmod nvme_fabrics 00:23:15.490 rmmod nvme_keyring 00:23:15.490 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:15.490 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:23:15.490 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:23:15.490 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1137731 ']' 00:23:15.490 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1137731 00:23:15.490 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' -z 1137731 ']' 00:23:15.490 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # kill -0 1137731 00:23:15.490 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # uname 00:23:15.490 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:15.490 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1137731 00:23:15.749 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:15.749 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
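A plain-arithmetic cross-check on the table above (not part of the log): with 65536-byte I/Os, MiB/s = IOPS * 65536 / 2^20 = IOPS / 16, so the Total row's 2405.50 IOPS works out to 150.34 MiB/s and the interim 1860.00 IOPS sample to 116.25 MiB/s, both matching the printed figures.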
00:23:15.489 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget
00:23:15.489 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:23:15.489 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:23:15.489 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:15.489 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini
00:23:15.489 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup
00:23:15.489 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync
00:23:15.490 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:15.490 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e
00:23:15.490 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:15.490 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:23:15.490 rmmod nvme_tcp
00:23:15.490 rmmod nvme_fabrics
00:23:15.490 rmmod nvme_keyring
00:23:15.490 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:15.490 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e
00:23:15.490 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0
00:23:15.490 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1137731 ']'
00:23:15.490 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1137731
00:23:15.490 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' -z 1137731 ']'
00:23:15.490 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # kill -0 1137731
00:23:15.490 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # uname
00:23:15.490 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:23:15.490 11:46:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1137731
00:23:15.749 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:23:15.749 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:23:15.749 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1137731'
00:23:15.749 killing process with pid 1137731
00:23:15.749 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # kill 1137731
00:23:15.749 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@976 -- # wait 1137731
00:23:15.749 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:23:15.749 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:23:15.750 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:23:15.750 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr
00:23:15.750 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save
00:23:15.750 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:23:15.750 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore
00:23:15.750 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:23:15.750 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:23:15.750 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:15.750 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:15.750 11:46:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:23:18.387
00:23:18.387 real 0m16.807s
00:23:18.387 user 0m33.400s
00:23:18.387 sys 0m7.036s
00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable
00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:23:18.387 ************************************
00:23:18.387 END TEST nvmf_shutdown_tc1
00:23:18.387 ************************************
00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2
00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable
00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:23:18.387 ************************************
00:23:18.387 START TEST nvmf_shutdown_tc2
00:23:18.387 ************************************
00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc2
00:23:18.387 11:46:43
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 
-- # mlx=() 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:18.387 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:18.387 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:18.387 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.387 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:18.388 11:46:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:18.388 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:23:18.388 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:18.388 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.504 ms
00:23:18.388
00:23:18.388 --- 10.0.0.2 ping statistics ---
00:23:18.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:18.388 rtt min/avg/max/mdev = 0.504/0.504/0.504/0.000 ms
00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:18.388 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:18.388 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms
00:23:18.388
00:23:18.388 --- 10.0.0.1 ping statistics ---
00:23:18.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:18.388 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms
00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0
00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:23:18.388 11:46:43
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1139610 00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1139610 00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 1139610 ']' 00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:18.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:18.388 11:46:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:18.388 [2024-11-15 11:46:43.860169] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:23:18.388 [2024-11-15 11:46:43.860242] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:18.650 [2024-11-15 11:46:43.956235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:18.650 [2024-11-15 11:46:43.991261] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:18.650 [2024-11-15 11:46:43.991290] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:18.650 [2024-11-15 11:46:43.991297] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:18.650 [2024-11-15 11:46:43.991301] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:18.650 [2024-11-15 11:46:43.991306] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
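For reference, the nvmftestinit plumbing traced above reduces to the following sequence (commands collected from the trace; the cvl_0_* names are this testbed's e810 ports and would differ elsewhere). Moving the target-side port into its own namespace lets the target (10.0.0.2) and the initiator (10.0.0.1) talk over real NICs on a single host:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> root ns

The doubled 'ip netns exec cvl_0_0_ns_spdk' on the nvmf_tgt command line above appears to come from the nvmf/common.sh@293 step prepending NVMF_TARGET_NS_CMD to an NVMF_APP that already carries the prefix; the second exec re-enters the same namespace, so the target still lands where intended. The -m 0x1E core mask is binary 11110, i.e. cores 1 through 4, which is why "Total cores available: 4" is followed by exactly four "Reactor started" lines below.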
00:23:18.650 [2024-11-15 11:46:43.992608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:18.650 [2024-11-15 11:46:43.992821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:18.650 [2024-11-15 11:46:43.993107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:18.650 [2024-11-15 11:46:43.993526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:19.221 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:19.221 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:23:19.221 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:19.221 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:19.221 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:19.221 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:19.221 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:19.221 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.221 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:19.221 [2024-11-15 11:46:44.705366] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:19.221 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.221 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:19.222 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:19.222 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:19.222 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:19.483 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:19.483 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:19.483 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:19.483 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:19.483 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:19.483 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:19.483 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:19.483 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:23:19.483 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:19.483 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:19.483 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:19.483 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:19.483 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:19.483 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:19.483 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:19.483 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:19.483 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:19.483 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:19.483 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:19.483 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:19.483 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:19.483 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:19.483 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.483 11:46:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:19.483 Malloc1 00:23:19.483 [2024-11-15 11:46:44.816451] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:19.483 Malloc2 00:23:19.483 Malloc3 00:23:19.483 Malloc4 00:23:19.483 Malloc5 00:23:19.744 Malloc6 00:23:19.744 Malloc7 00:23:19.744 Malloc8 00:23:19.744 Malloc9 00:23:19.744 Malloc10 00:23:19.744 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.744 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:19.744 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:19.744 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:19.744 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1139994 00:23:19.744 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1139994 /var/tmp/bdevperf.sock 00:23:19.744 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 1139994 ']' 00:23:19.744 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:19.744 11:46:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:19.744 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:19.744 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:19.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:19.744 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:19.744 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:19.744 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:19.744 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:23:19.744 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:23:19.744 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:19.744 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:19.744 { 00:23:19.744 "params": { 00:23:19.744 "name": "Nvme$subsystem", 00:23:19.744 "trtype": "$TEST_TRANSPORT", 00:23:19.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.744 "adrfam": "ipv4", 00:23:19.744 "trsvcid": "$NVMF_PORT", 00:23:19.744 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.745 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.745 "hdgst": ${hdgst:-false}, 00:23:19.745 "ddgst": ${ddgst:-false} 00:23:19.745 }, 00:23:19.745 "method": "bdev_nvme_attach_controller" 00:23:19.745 } 00:23:19.745 EOF 00:23:19.745 )") 00:23:19.745 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:19.745 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:19.745 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:19.745 { 00:23:19.745 "params": { 00:23:19.745 "name": "Nvme$subsystem", 00:23:19.745 "trtype": "$TEST_TRANSPORT", 00:23:19.745 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.745 "adrfam": "ipv4", 00:23:19.745 "trsvcid": "$NVMF_PORT", 00:23:19.745 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.745 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.745 "hdgst": ${hdgst:-false}, 00:23:19.745 "ddgst": ${ddgst:-false} 00:23:19.745 }, 00:23:19.745 "method": "bdev_nvme_attach_controller" 00:23:19.745 } 00:23:19.745 EOF 00:23:19.745 )") 00:23:19.745 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:19.745 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:19.745 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:19.745 { 00:23:19.745 "params": { 00:23:19.745 
"name": "Nvme$subsystem", 00:23:19.745 "trtype": "$TEST_TRANSPORT", 00:23:19.745 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.745 "adrfam": "ipv4", 00:23:19.745 "trsvcid": "$NVMF_PORT", 00:23:19.745 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.745 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.745 "hdgst": ${hdgst:-false}, 00:23:19.745 "ddgst": ${ddgst:-false} 00:23:19.745 }, 00:23:19.745 "method": "bdev_nvme_attach_controller" 00:23:19.745 } 00:23:19.745 EOF 00:23:19.745 )") 00:23:19.745 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:19.745 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:19.745 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:19.745 { 00:23:19.745 "params": { 00:23:19.745 "name": "Nvme$subsystem", 00:23:19.745 "trtype": "$TEST_TRANSPORT", 00:23:19.745 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:19.745 "adrfam": "ipv4", 00:23:19.745 "trsvcid": "$NVMF_PORT", 00:23:19.745 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:19.745 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:19.745 "hdgst": ${hdgst:-false}, 00:23:19.745 "ddgst": ${ddgst:-false} 00:23:19.745 }, 00:23:19.745 "method": "bdev_nvme_attach_controller" 00:23:19.745 } 00:23:19.745 EOF 00:23:19.745 )") 00:23:20.006 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:20.006 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:20.006 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:20.006 { 00:23:20.006 "params": { 00:23:20.006 "name": "Nvme$subsystem", 00:23:20.006 "trtype": "$TEST_TRANSPORT", 00:23:20.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.006 "adrfam": "ipv4", 00:23:20.006 "trsvcid": "$NVMF_PORT", 00:23:20.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.006 "hdgst": ${hdgst:-false}, 00:23:20.006 "ddgst": ${ddgst:-false} 00:23:20.006 }, 00:23:20.006 "method": "bdev_nvme_attach_controller" 00:23:20.006 } 00:23:20.006 EOF 00:23:20.006 )") 00:23:20.006 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:20.006 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:20.006 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:20.006 { 00:23:20.006 "params": { 00:23:20.006 "name": "Nvme$subsystem", 00:23:20.006 "trtype": "$TEST_TRANSPORT", 00:23:20.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.006 "adrfam": "ipv4", 00:23:20.006 "trsvcid": "$NVMF_PORT", 00:23:20.007 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.007 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.007 "hdgst": ${hdgst:-false}, 00:23:20.007 "ddgst": ${ddgst:-false} 00:23:20.007 }, 00:23:20.007 "method": "bdev_nvme_attach_controller" 00:23:20.007 } 00:23:20.007 EOF 00:23:20.007 )") 00:23:20.007 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:20.007 [2024-11-15 11:46:45.261685] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:23:20.007 [2024-11-15 11:46:45.261740] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1139994 ] 00:23:20.007 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:20.007 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:20.007 { 00:23:20.007 "params": { 00:23:20.007 "name": "Nvme$subsystem", 00:23:20.007 "trtype": "$TEST_TRANSPORT", 00:23:20.007 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.007 "adrfam": "ipv4", 00:23:20.007 "trsvcid": "$NVMF_PORT", 00:23:20.007 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.007 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.007 "hdgst": ${hdgst:-false}, 00:23:20.007 "ddgst": ${ddgst:-false} 00:23:20.007 }, 00:23:20.007 "method": "bdev_nvme_attach_controller" 00:23:20.007 } 00:23:20.007 EOF 00:23:20.007 )") 00:23:20.007 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:20.007 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:20.007 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:20.007 { 00:23:20.007 "params": { 00:23:20.007 "name": "Nvme$subsystem", 00:23:20.007 "trtype": "$TEST_TRANSPORT", 00:23:20.007 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.007 "adrfam": "ipv4", 00:23:20.007 "trsvcid": "$NVMF_PORT", 00:23:20.007 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.007 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.007 "hdgst": ${hdgst:-false}, 00:23:20.007 "ddgst": ${ddgst:-false} 00:23:20.007 }, 00:23:20.007 "method": "bdev_nvme_attach_controller" 00:23:20.007 } 00:23:20.007 EOF 00:23:20.007 )") 00:23:20.007 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:20.007 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:20.007 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:20.007 { 00:23:20.007 "params": { 00:23:20.007 "name": "Nvme$subsystem", 00:23:20.007 "trtype": "$TEST_TRANSPORT", 00:23:20.007 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.007 "adrfam": "ipv4", 00:23:20.007 "trsvcid": "$NVMF_PORT", 00:23:20.007 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.007 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.007 "hdgst": ${hdgst:-false}, 00:23:20.007 "ddgst": ${ddgst:-false} 00:23:20.007 }, 00:23:20.007 "method": "bdev_nvme_attach_controller" 00:23:20.007 } 00:23:20.007 EOF 00:23:20.007 )") 00:23:20.007 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:20.007 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:20.007 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:20.007 { 00:23:20.007 "params": { 00:23:20.007 "name": "Nvme$subsystem", 00:23:20.007 "trtype": "$TEST_TRANSPORT", 00:23:20.007 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:20.007 
"adrfam": "ipv4", 00:23:20.007 "trsvcid": "$NVMF_PORT", 00:23:20.007 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:20.007 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:20.007 "hdgst": ${hdgst:-false}, 00:23:20.007 "ddgst": ${ddgst:-false} 00:23:20.007 }, 00:23:20.007 "method": "bdev_nvme_attach_controller" 00:23:20.007 } 00:23:20.007 EOF 00:23:20.007 )") 00:23:20.007 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:23:20.007 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:23:20.007 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:23:20.007 11:46:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:20.007 "params": { 00:23:20.007 "name": "Nvme1", 00:23:20.007 "trtype": "tcp", 00:23:20.007 "traddr": "10.0.0.2", 00:23:20.007 "adrfam": "ipv4", 00:23:20.007 "trsvcid": "4420", 00:23:20.007 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.007 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:20.007 "hdgst": false, 00:23:20.007 "ddgst": false 00:23:20.007 }, 00:23:20.007 "method": "bdev_nvme_attach_controller" 00:23:20.007 },{ 00:23:20.007 "params": { 00:23:20.007 "name": "Nvme2", 00:23:20.007 "trtype": "tcp", 00:23:20.007 "traddr": "10.0.0.2", 00:23:20.007 "adrfam": "ipv4", 00:23:20.007 "trsvcid": "4420", 00:23:20.007 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:20.007 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:20.007 "hdgst": false, 00:23:20.007 "ddgst": false 00:23:20.007 }, 00:23:20.007 "method": "bdev_nvme_attach_controller" 00:23:20.007 },{ 00:23:20.007 "params": { 00:23:20.007 "name": "Nvme3", 00:23:20.007 "trtype": "tcp", 00:23:20.007 "traddr": "10.0.0.2", 00:23:20.007 "adrfam": "ipv4", 00:23:20.007 "trsvcid": "4420", 00:23:20.007 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:20.007 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:20.007 "hdgst": false, 00:23:20.007 "ddgst": false 00:23:20.007 }, 00:23:20.007 "method": "bdev_nvme_attach_controller" 00:23:20.007 },{ 00:23:20.007 "params": { 00:23:20.007 "name": "Nvme4", 00:23:20.007 "trtype": "tcp", 00:23:20.007 "traddr": "10.0.0.2", 00:23:20.007 "adrfam": "ipv4", 00:23:20.007 "trsvcid": "4420", 00:23:20.007 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:20.007 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:20.007 "hdgst": false, 00:23:20.007 "ddgst": false 00:23:20.007 }, 00:23:20.007 "method": "bdev_nvme_attach_controller" 00:23:20.007 },{ 00:23:20.007 "params": { 00:23:20.007 "name": "Nvme5", 00:23:20.007 "trtype": "tcp", 00:23:20.007 "traddr": "10.0.0.2", 00:23:20.007 "adrfam": "ipv4", 00:23:20.007 "trsvcid": "4420", 00:23:20.007 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:20.007 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:20.007 "hdgst": false, 00:23:20.007 "ddgst": false 00:23:20.007 }, 00:23:20.007 "method": "bdev_nvme_attach_controller" 00:23:20.007 },{ 00:23:20.007 "params": { 00:23:20.007 "name": "Nvme6", 00:23:20.007 "trtype": "tcp", 00:23:20.007 "traddr": "10.0.0.2", 00:23:20.007 "adrfam": "ipv4", 00:23:20.007 "trsvcid": "4420", 00:23:20.007 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:20.007 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:20.007 "hdgst": false, 00:23:20.007 "ddgst": false 00:23:20.007 }, 00:23:20.007 "method": "bdev_nvme_attach_controller" 00:23:20.007 },{ 00:23:20.007 "params": { 00:23:20.007 "name": "Nvme7", 00:23:20.007 "trtype": "tcp", 00:23:20.007 "traddr": "10.0.0.2", 
00:23:20.007 "adrfam": "ipv4", 00:23:20.007 "trsvcid": "4420", 00:23:20.007 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:20.007 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:20.007 "hdgst": false, 00:23:20.007 "ddgst": false 00:23:20.007 }, 00:23:20.007 "method": "bdev_nvme_attach_controller" 00:23:20.007 },{ 00:23:20.007 "params": { 00:23:20.007 "name": "Nvme8", 00:23:20.007 "trtype": "tcp", 00:23:20.007 "traddr": "10.0.0.2", 00:23:20.007 "adrfam": "ipv4", 00:23:20.007 "trsvcid": "4420", 00:23:20.007 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:20.007 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:20.007 "hdgst": false, 00:23:20.007 "ddgst": false 00:23:20.007 }, 00:23:20.007 "method": "bdev_nvme_attach_controller" 00:23:20.007 },{ 00:23:20.007 "params": { 00:23:20.007 "name": "Nvme9", 00:23:20.007 "trtype": "tcp", 00:23:20.007 "traddr": "10.0.0.2", 00:23:20.007 "adrfam": "ipv4", 00:23:20.007 "trsvcid": "4420", 00:23:20.007 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:20.007 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:20.007 "hdgst": false, 00:23:20.007 "ddgst": false 00:23:20.007 }, 00:23:20.007 "method": "bdev_nvme_attach_controller" 00:23:20.007 },{ 00:23:20.007 "params": { 00:23:20.007 "name": "Nvme10", 00:23:20.007 "trtype": "tcp", 00:23:20.007 "traddr": "10.0.0.2", 00:23:20.007 "adrfam": "ipv4", 00:23:20.007 "trsvcid": "4420", 00:23:20.007 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:20.007 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:20.008 "hdgst": false, 00:23:20.008 "ddgst": false 00:23:20.008 }, 00:23:20.008 "method": "bdev_nvme_attach_controller" 00:23:20.008 }' 00:23:20.008 [2024-11-15 11:46:45.353292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.008 [2024-11-15 11:46:45.390043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.923 Running I/O for 10 seconds... 
00:23:21.923 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:21.923 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:23:21.923 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:21.923 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.923 11:46:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:21.923 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.923 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:21.923 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:21.923 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:21.923 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:23:21.923 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:23:21.923 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:21.923 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:21.923 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:21.923 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:21.923 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.923 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:21.923 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.923 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:23:21.923 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:23:21.923 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:22.184 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:22.184 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:22.184 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:22.184 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:22.184 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.184 11:46:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:22.184 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.184 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:22.184 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:22.184 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:22.445 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:22.445 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:22.445 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:22.445 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:22.445 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.445 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:22.445 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.445 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:23:22.445 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:23:22.445 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:23:22.445 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:23:22.445 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:23:22.445 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1139994 00:23:22.445 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 1139994 ']' 00:23:22.445 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 1139994 00:23:22.445 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:23:22.445 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:22.445 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1139994 00:23:22.445 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:22.445 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:22.445 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1139994' 00:23:22.445 killing process with pid 1139994 00:23:22.445 11:46:47 
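The three read_io_count samples above (3, then 67, then 131) are the waitforio gate from shutdown.sh@58-@70: the harness polls bdevperf's RPC socket up to ten times, 250 ms apart, and only proceeds once Nvme1n1 has completed at least 100 reads, guaranteeing that real I/O is in flight at the moment the perf process is killed. Reassembled from the trace:

    waitforio() {
        local rpc_sock=$1 bdev=$2 ret=1 i
        (( i = 10 ))                       # at most ten samples, a 2.5 s budget
        while (( i != 0 )); do
            # completed reads on the bdev so far, via bdevperf's RPC server
            read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
                            jq -r '.bdevs[0].num_read_ops')
            if [ "$read_io_count" -ge 100 ]; then
                ret=0
                break
            fi
            sleep 0.25
            (( i-- ))
        done
        return $ret
    }

Here the threshold is crossed on the third sample (131 >= 100), so the test immediately moves on to killing bdevperf mid-run.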
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 1139994 00:23:22.445 11:46:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 1139994 00:23:22.706 2184.00 IOPS, 136.50 MiB/s [2024-11-15T10:46:48.204Z] Received shutdown signal, test time was about 1.027300 seconds 00:23:22.706 00:23:22.706 Latency(us) 00:23:22.706 [2024-11-15T10:46:48.204Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:22.706 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:22.706 Verification LBA range: start 0x0 length 0x400 00:23:22.706 Nvme1n1 : 1.02 250.20 15.64 0.00 0.00 252678.61 13981.01 249910.61 00:23:22.706 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:22.706 Verification LBA range: start 0x0 length 0x400 00:23:22.706 Nvme2n1 : 1.03 249.41 15.59 0.00 0.00 248712.53 19660.80 258648.75 00:23:22.706 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:22.706 Verification LBA range: start 0x0 length 0x400 00:23:22.706 Nvme3n1 : 1.02 250.90 15.68 0.00 0.00 242238.51 22173.01 251658.24 00:23:22.706 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:22.706 Verification LBA range: start 0x0 length 0x400 00:23:22.706 Nvme4n1 : 1.00 256.83 16.05 0.00 0.00 232157.01 18896.21 251658.24 00:23:22.706 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:22.706 Verification LBA range: start 0x0 length 0x400 00:23:22.706 Nvme5n1 : 0.99 193.15 12.07 0.00 0.00 302363.59 27306.67 269134.51 00:23:22.706 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:22.706 Verification LBA range: start 0x0 length 0x400 00:23:22.706 Nvme6n1 : 1.00 256.27 16.02 0.00 0.00 223325.87 21517.65 234181.97 00:23:22.706 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:22.706 Verification LBA range: start 0x0 length 0x400 00:23:22.706 Nvme7n1 : 0.99 259.09 16.19 0.00 0.00 215856.21 16165.55 248162.99 00:23:22.706 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:22.706 Verification LBA range: start 0x0 length 0x400 00:23:22.706 Nvme8n1 : 0.99 261.87 16.37 0.00 0.00 207465.88 6171.31 253405.87 00:23:22.706 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:22.706 Verification LBA range: start 0x0 length 0x400 00:23:22.706 Nvme9n1 : 0.98 194.94 12.18 0.00 0.00 273974.33 19005.44 251658.24 00:23:22.706 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:22.706 Verification LBA range: start 0x0 length 0x400 00:23:22.706 Nvme10n1 : 1.02 192.88 12.06 0.00 0.00 258015.15 8574.29 248162.99 00:23:22.706 [2024-11-15T10:46:48.204Z] =================================================================================================================== 00:23:22.706 [2024-11-15T10:46:48.204Z] Total : 2365.55 147.85 0.00 0.00 243035.15 6171.31 269134.51 00:23:22.706 11:46:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:23:23.650 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1139610 00:23:23.650 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:23:23.650 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f 
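The summary table is self-consistent with the job parameters: at the 64 KiB I/O size bdevperf was launched with (-o 65536), the aggregate 2365.55 IOPS works out to 2365.55 x 64 KiB / 1024 = 147.85 MiB/s, exactly the figure reported, and the roughly 1.03 s per-device runtimes confirm the run was cut short by the shutdown signal rather than completing its full -t 10 window.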
./local-job0-0-verify.state 00:23:23.650 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:23.650 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:23.650 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:23.650 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:23.650 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:23:23.650 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:23.650 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:23:23.650 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:23.650 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:23.650 rmmod nvme_tcp 00:23:23.912 rmmod nvme_fabrics 00:23:23.912 rmmod nvme_keyring 00:23:23.912 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:23.912 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:23:23.912 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:23:23.912 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1139610 ']' 00:23:23.912 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1139610 00:23:23.912 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 1139610 ']' 00:23:23.912 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 1139610 00:23:23.912 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:23:23.912 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:23.912 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1139610 00:23:23.912 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:23.912 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:23.912 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1139610' 00:23:23.912 killing process with pid 1139610 00:23:23.912 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 1139610 00:23:23.912 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 1139610 00:23:24.174 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' 
== iso ']' 00:23:24.174 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:24.174 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:24.174 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:23:24.174 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:23:24.174 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:24.174 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:23:24.174 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:24.174 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:24.174 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.174 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:24.174 11:46:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.092 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:26.092 00:23:26.092 real 0m8.164s 00:23:26.092 user 0m25.026s 00:23:26.092 sys 0m1.324s 00:23:26.092 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:26.092 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:26.092 ************************************ 00:23:26.092 END TEST nvmf_shutdown_tc2 00:23:26.092 ************************************ 00:23:26.355 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:26.355 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:23:26.355 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:26.355 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:26.355 ************************************ 00:23:26.356 START TEST nvmf_shutdown_tc3 00:23:26.356 ************************************ 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc3 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:26.356 11:46:51 
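Note how little bookkeeping the firewall cleanup above needs: because every rule the test installed carried the '-m comment --comment SPDK_NVMF:...' tag at insertion time, the teardown at nvmf/common.sh@791 can regenerate the host ruleset minus exactly those rules in one pipeline:

    # drop only the rules this test added, leaving everything else untouched
    iptables-save | grep -v SPDK_NVMF | iptables-restore

The same setup/teardown pair then runs again for tc3 below, against a freshly rebuilt namespace.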
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:26.356 11:46:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:26.356 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:26.356 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:26.356 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:26.356 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
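The tc3 prologue reruns nvmftestinit from scratch, and the trace above shows how it finds physical NICs: bash arrays per NIC family are filled from a PCI vendor:device cache (both ports here match Intel 0x8086:0x159b, so they land in the e810 array), and each surviving PCI address is mapped to its kernel netdev through sysfs. The discovery step reduces to:

    # map each PCI function (e.g. 0000:4b:00.0) to its net interface name
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path
        net_devs+=("${pci_net_devs[@]}")                   # -> cvl_0_0, cvl_0_1
    done

which is why the log reports 'Found net devices under 0000:4b:00.0: cvl_0_0' and the twin entry for port .1.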
00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:26.356 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:26.357 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:26.357 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:26.357 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:26.357 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:26.357 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:26.357 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:26.357 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:26.357 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:26.357 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:26.357 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:26.357 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:26.357 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:26.357 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:26.357 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:26.357 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:26.619 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:26.619 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:26.619 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:26.619 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:26.619 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:26.620 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.651 ms 00:23:26.620 00:23:26.620 --- 10.0.0.2 ping statistics --- 00:23:26.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.620 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:23:26.620 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:26.620 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:26.620 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.328 ms 00:23:26.620 00:23:26.620 --- 10.0.0.1 ping statistics --- 00:23:26.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:26.620 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:23:26.620 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:26.620 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:23:26.620 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:26.620 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:26.620 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:26.620 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:26.620 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:26.620 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:26.620 11:46:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:26.620 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:26.620 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:26.620 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:26.620 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:26.620 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1141452 00:23:26.620 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1141452 00:23:26.620 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:26.620 11:46:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 1141452 ']' 00:23:26.620 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.620 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:26.620 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:26.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:26.620 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:26.620 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:26.620 [2024-11-15 11:46:52.086052] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:23:26.620 [2024-11-15 11:46:52.086113] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:26.882 [2024-11-15 11:46:52.180764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:26.882 [2024-11-15 11:46:52.221304] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:26.882 [2024-11-15 11:46:52.221338] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:26.882 [2024-11-15 11:46:52.221344] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:26.882 [2024-11-15 11:46:52.221350] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:26.882 [2024-11-15 11:46:52.221354] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
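The nvmfappstart step traced above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then blocks in waitforlisten 1141452 until the target serves RPCs on /var/tmp/spdk.sock. A minimal sketch of that wait loop follows; it is a reconstruction, not the verbatim autotest_common.sh code, and SPDK_ROOT is an assumed placeholder for the repo checkout path.

# Hedged sketch of the waitforlisten step, assuming rpc.py polling:
# succeed once the freshly started target answers rpc_get_methods on
# the UNIX-domain socket, give up if it dies or retries run out.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
    while (( max_retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
        # rpc_get_methods succeeds once the socket is up and serving
        "$SPDK_ROOT"/scripts/rpc.py -s "$rpc_addr" rpc_get_methods \
            >/dev/null 2>&1 && return 0
        sleep 0.5
    done
    return 1
}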
00:23:26.882 [2024-11-15 11:46:52.222870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:26.882 [2024-11-15 11:46:52.223035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:26.882 [2024-11-15 11:46:52.223187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.882 [2024-11-15 11:46:52.223189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:27.453 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:27.453 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:23:27.453 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:27.453 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:27.453 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:27.453 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:27.453 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:27.453 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.453 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:27.453 [2024-11-15 11:46:52.917222] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:27.453 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.453 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:27.453 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:27.453 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:27.453 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:27.453 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:27.453 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:27.453 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:27.453 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:27.453 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:27.453 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:27.453 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:27.453 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:23:27.453 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:27.714 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:27.714 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:27.714 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:27.714 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:27.714 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:27.715 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:27.715 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:27.715 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:27.715 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:27.715 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:27.715 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:27.715 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:23:27.715 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:27.715 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.715 11:46:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:27.715 Malloc1 00:23:27.715 [2024-11-15 11:46:53.026137] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:27.715 Malloc2 00:23:27.715 Malloc3 00:23:27.715 Malloc4 00:23:27.715 Malloc5 00:23:27.715 Malloc6 00:23:27.976 Malloc7 00:23:27.976 Malloc8 00:23:27.976 Malloc9 00:23:27.976 Malloc10 00:23:27.976 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.976 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:27.976 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:27.976 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:27.976 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1141838 00:23:27.976 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1141838 /var/tmp/bdevperf.sock 00:23:27.976 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 1141838 ']' 00:23:27.976 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:27.976 11:46:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:27.976 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:27.976 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:27.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:27.976 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:27.976 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:27.976 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:27.976 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:23:27.976 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:23:27.976 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:27.976 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:27.976 { 00:23:27.976 "params": { 00:23:27.976 "name": "Nvme$subsystem", 00:23:27.976 "trtype": "$TEST_TRANSPORT", 00:23:27.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.976 "adrfam": "ipv4", 00:23:27.976 "trsvcid": "$NVMF_PORT", 00:23:27.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.976 "hdgst": ${hdgst:-false}, 00:23:27.976 "ddgst": ${ddgst:-false} 00:23:27.976 }, 00:23:27.976 "method": "bdev_nvme_attach_controller" 00:23:27.976 } 00:23:27.976 EOF 00:23:27.976 )") 00:23:27.976 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:27.976 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:27.976 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:27.976 { 00:23:27.976 "params": { 00:23:27.976 "name": "Nvme$subsystem", 00:23:27.976 "trtype": "$TEST_TRANSPORT", 00:23:27.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.976 "adrfam": "ipv4", 00:23:27.976 "trsvcid": "$NVMF_PORT", 00:23:27.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.976 "hdgst": ${hdgst:-false}, 00:23:27.976 "ddgst": ${ddgst:-false} 00:23:27.976 }, 00:23:27.976 "method": "bdev_nvme_attach_controller" 00:23:27.976 } 00:23:27.976 EOF 00:23:27.976 )") 00:23:27.976 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:27.976 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:27.976 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:27.976 { 00:23:27.976 "params": { 00:23:27.976 
"name": "Nvme$subsystem", 00:23:27.976 "trtype": "$TEST_TRANSPORT", 00:23:27.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.976 "adrfam": "ipv4", 00:23:27.976 "trsvcid": "$NVMF_PORT", 00:23:27.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.976 "hdgst": ${hdgst:-false}, 00:23:27.976 "ddgst": ${ddgst:-false} 00:23:27.976 }, 00:23:27.976 "method": "bdev_nvme_attach_controller" 00:23:27.976 } 00:23:27.976 EOF 00:23:27.976 )") 00:23:27.976 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:27.976 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:27.976 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:27.976 { 00:23:27.976 "params": { 00:23:27.976 "name": "Nvme$subsystem", 00:23:27.976 "trtype": "$TEST_TRANSPORT", 00:23:27.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.976 "adrfam": "ipv4", 00:23:27.976 "trsvcid": "$NVMF_PORT", 00:23:27.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.976 "hdgst": ${hdgst:-false}, 00:23:27.976 "ddgst": ${ddgst:-false} 00:23:27.976 }, 00:23:27.976 "method": "bdev_nvme_attach_controller" 00:23:27.976 } 00:23:27.976 EOF 00:23:27.976 )") 00:23:27.976 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:27.976 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:27.976 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:27.976 { 00:23:27.976 "params": { 00:23:27.976 "name": "Nvme$subsystem", 00:23:27.976 "trtype": "$TEST_TRANSPORT", 00:23:27.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.976 "adrfam": "ipv4", 00:23:27.976 "trsvcid": "$NVMF_PORT", 00:23:27.976 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.976 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.976 "hdgst": ${hdgst:-false}, 00:23:27.976 "ddgst": ${ddgst:-false} 00:23:27.976 }, 00:23:27.976 "method": "bdev_nvme_attach_controller" 00:23:27.976 } 00:23:27.976 EOF 00:23:27.976 )") 00:23:27.976 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:27.976 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:27.976 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:27.976 { 00:23:27.976 "params": { 00:23:27.976 "name": "Nvme$subsystem", 00:23:27.976 "trtype": "$TEST_TRANSPORT", 00:23:27.976 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:27.976 "adrfam": "ipv4", 00:23:27.977 "trsvcid": "$NVMF_PORT", 00:23:27.977 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:27.977 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:27.977 "hdgst": ${hdgst:-false}, 00:23:27.977 "ddgst": ${ddgst:-false} 00:23:27.977 }, 00:23:27.977 "method": "bdev_nvme_attach_controller" 00:23:27.977 } 00:23:27.977 EOF 00:23:27.977 )") 00:23:27.977 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:28.238 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:23:28.238 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:28.238 { 00:23:28.238 "params": { 00:23:28.238 "name": "Nvme$subsystem", 00:23:28.238 "trtype": "$TEST_TRANSPORT", 00:23:28.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.238 "adrfam": "ipv4", 00:23:28.238 "trsvcid": "$NVMF_PORT", 00:23:28.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.238 "hdgst": ${hdgst:-false}, 00:23:28.238 "ddgst": ${ddgst:-false} 00:23:28.238 }, 00:23:28.238 "method": "bdev_nvme_attach_controller" 00:23:28.238 } 00:23:28.238 EOF 00:23:28.238 )") 00:23:28.238 [2024-11-15 11:46:53.477842] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:23:28.238 [2024-11-15 11:46:53.477897] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1141838 ] 00:23:28.238 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:28.238 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:28.238 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:28.238 { 00:23:28.238 "params": { 00:23:28.238 "name": "Nvme$subsystem", 00:23:28.238 "trtype": "$TEST_TRANSPORT", 00:23:28.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.238 "adrfam": "ipv4", 00:23:28.238 "trsvcid": "$NVMF_PORT", 00:23:28.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.238 "hdgst": ${hdgst:-false}, 00:23:28.238 "ddgst": ${ddgst:-false} 00:23:28.238 }, 00:23:28.238 "method": "bdev_nvme_attach_controller" 00:23:28.238 } 00:23:28.238 EOF 00:23:28.238 )") 00:23:28.238 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:28.238 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:28.238 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:28.238 { 00:23:28.238 "params": { 00:23:28.238 "name": "Nvme$subsystem", 00:23:28.238 "trtype": "$TEST_TRANSPORT", 00:23:28.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.238 "adrfam": "ipv4", 00:23:28.238 "trsvcid": "$NVMF_PORT", 00:23:28.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.238 "hdgst": ${hdgst:-false}, 00:23:28.238 "ddgst": ${ddgst:-false} 00:23:28.238 }, 00:23:28.238 "method": "bdev_nvme_attach_controller" 00:23:28.238 } 00:23:28.238 EOF 00:23:28.238 )") 00:23:28.238 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:28.238 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:28.238 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:28.238 { 00:23:28.238 "params": { 00:23:28.238 "name": "Nvme$subsystem", 00:23:28.238 "trtype": "$TEST_TRANSPORT", 00:23:28.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:28.238 
"adrfam": "ipv4", 00:23:28.238 "trsvcid": "$NVMF_PORT", 00:23:28.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:28.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:28.238 "hdgst": ${hdgst:-false}, 00:23:28.238 "ddgst": ${ddgst:-false} 00:23:28.238 }, 00:23:28.238 "method": "bdev_nvme_attach_controller" 00:23:28.238 } 00:23:28.238 EOF 00:23:28.238 )") 00:23:28.238 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:23:28.238 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:23:28.238 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:23:28.238 11:46:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:28.238 "params": { 00:23:28.238 "name": "Nvme1", 00:23:28.238 "trtype": "tcp", 00:23:28.238 "traddr": "10.0.0.2", 00:23:28.238 "adrfam": "ipv4", 00:23:28.238 "trsvcid": "4420", 00:23:28.238 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.238 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:28.238 "hdgst": false, 00:23:28.238 "ddgst": false 00:23:28.238 }, 00:23:28.238 "method": "bdev_nvme_attach_controller" 00:23:28.238 },{ 00:23:28.238 "params": { 00:23:28.238 "name": "Nvme2", 00:23:28.238 "trtype": "tcp", 00:23:28.238 "traddr": "10.0.0.2", 00:23:28.238 "adrfam": "ipv4", 00:23:28.238 "trsvcid": "4420", 00:23:28.238 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:28.238 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:28.238 "hdgst": false, 00:23:28.238 "ddgst": false 00:23:28.238 }, 00:23:28.238 "method": "bdev_nvme_attach_controller" 00:23:28.238 },{ 00:23:28.238 "params": { 00:23:28.238 "name": "Nvme3", 00:23:28.238 "trtype": "tcp", 00:23:28.238 "traddr": "10.0.0.2", 00:23:28.238 "adrfam": "ipv4", 00:23:28.238 "trsvcid": "4420", 00:23:28.238 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:28.238 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:28.238 "hdgst": false, 00:23:28.238 "ddgst": false 00:23:28.238 }, 00:23:28.238 "method": "bdev_nvme_attach_controller" 00:23:28.238 },{ 00:23:28.238 "params": { 00:23:28.238 "name": "Nvme4", 00:23:28.238 "trtype": "tcp", 00:23:28.238 "traddr": "10.0.0.2", 00:23:28.238 "adrfam": "ipv4", 00:23:28.238 "trsvcid": "4420", 00:23:28.238 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:28.238 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:28.238 "hdgst": false, 00:23:28.238 "ddgst": false 00:23:28.238 }, 00:23:28.238 "method": "bdev_nvme_attach_controller" 00:23:28.238 },{ 00:23:28.238 "params": { 00:23:28.238 "name": "Nvme5", 00:23:28.238 "trtype": "tcp", 00:23:28.238 "traddr": "10.0.0.2", 00:23:28.238 "adrfam": "ipv4", 00:23:28.238 "trsvcid": "4420", 00:23:28.238 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:28.238 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:28.238 "hdgst": false, 00:23:28.238 "ddgst": false 00:23:28.238 }, 00:23:28.238 "method": "bdev_nvme_attach_controller" 00:23:28.238 },{ 00:23:28.238 "params": { 00:23:28.238 "name": "Nvme6", 00:23:28.238 "trtype": "tcp", 00:23:28.238 "traddr": "10.0.0.2", 00:23:28.238 "adrfam": "ipv4", 00:23:28.238 "trsvcid": "4420", 00:23:28.238 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:28.238 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:28.238 "hdgst": false, 00:23:28.238 "ddgst": false 00:23:28.238 }, 00:23:28.238 "method": "bdev_nvme_attach_controller" 00:23:28.238 },{ 00:23:28.238 "params": { 00:23:28.238 "name": "Nvme7", 00:23:28.238 "trtype": "tcp", 00:23:28.239 "traddr": "10.0.0.2", 
00:23:28.239 "adrfam": "ipv4", 00:23:28.239 "trsvcid": "4420", 00:23:28.239 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:28.239 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:28.239 "hdgst": false, 00:23:28.239 "ddgst": false 00:23:28.239 }, 00:23:28.239 "method": "bdev_nvme_attach_controller" 00:23:28.239 },{ 00:23:28.239 "params": { 00:23:28.239 "name": "Nvme8", 00:23:28.239 "trtype": "tcp", 00:23:28.239 "traddr": "10.0.0.2", 00:23:28.239 "adrfam": "ipv4", 00:23:28.239 "trsvcid": "4420", 00:23:28.239 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:28.239 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:28.239 "hdgst": false, 00:23:28.239 "ddgst": false 00:23:28.239 }, 00:23:28.239 "method": "bdev_nvme_attach_controller" 00:23:28.239 },{ 00:23:28.239 "params": { 00:23:28.239 "name": "Nvme9", 00:23:28.239 "trtype": "tcp", 00:23:28.239 "traddr": "10.0.0.2", 00:23:28.239 "adrfam": "ipv4", 00:23:28.239 "trsvcid": "4420", 00:23:28.239 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:28.239 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:28.239 "hdgst": false, 00:23:28.239 "ddgst": false 00:23:28.239 }, 00:23:28.239 "method": "bdev_nvme_attach_controller" 00:23:28.239 },{ 00:23:28.239 "params": { 00:23:28.239 "name": "Nvme10", 00:23:28.239 "trtype": "tcp", 00:23:28.239 "traddr": "10.0.0.2", 00:23:28.239 "adrfam": "ipv4", 00:23:28.239 "trsvcid": "4420", 00:23:28.239 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:28.239 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:28.239 "hdgst": false, 00:23:28.239 "ddgst": false 00:23:28.239 }, 00:23:28.239 "method": "bdev_nvme_attach_controller" 00:23:28.239 }' 00:23:28.239 [2024-11-15 11:46:53.565271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.239 [2024-11-15 11:46:53.602188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.155 Running I/O for 10 seconds... 
00:23:30.155 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:30.155 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:23:30.155 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:30.155 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.155 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:30.155 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.155 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:30.155 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:30.155 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:30.155 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:23:30.155 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:23:30.155 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:23:30.155 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:23:30.155 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:30.155 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:30.155 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:30.155 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.155 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:30.155 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.155 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:23:30.155 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:23:30.155 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:30.417 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:30.417 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:30.417 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:30.417 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:30.417 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.417 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:30.417 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.417 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:23:30.417 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:23:30.417 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:23:30.693 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:23:30.693 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:23:30.693 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:30.693 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:23:30.693 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.693 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:30.693 11:46:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.693 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=135 00:23:30.693 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 135 -ge 100 ']' 00:23:30.693 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:23:30.693 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:23:30.693 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:23:30.693 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1141452 00:23:30.693 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 1141452 ']' 00:23:30.693 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 1141452 00:23:30.693 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # uname 00:23:30.693 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:30.693 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1141452 00:23:30.693 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:30.693 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:30.693 11:46:56 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1141452' 00:23:30.693 killing process with pid 1141452 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # kill 1141452 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@976 -- # wait 1141452 00:23:30.693 [2024-11-15 11:46:56.083616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe583b0 is same with the state(6) to be set 00:23:30.693 [2024-11-15 11:46:56.084192] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5af80 is same with the state(6) to be set 00:23:30.694 [2024-11-15 11:46:56.085390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe588a0 is same with the state(6) to be set 00:23:30.695 [2024-11-15 11:46:56.086934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe58d70 is same with the state(6) to be set 00:23:30.695 [the identical tcp.c:1773 *ERROR* line repeats many more times for tqpairs 0xe583b0, 0xe5af80, 0xe588a0 and 0xe58d70 as the target tears down its connections after the kill; consecutive duplicates omitted]
00:23:30.695 [2024-11-15 11:46:56.088358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.695 [2024-11-15 11:46:56.088374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.695 [2024-11-15 11:46:56.088383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.695 [2024-11-15 11:46:56.088387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.695 [2024-11-15 11:46:56.088392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.695 [2024-11-15 11:46:56.088397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.695 [2024-11-15 11:46:56.088402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.695 [2024-11-15 11:46:56.088407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.695 [2024-11-15 11:46:56.088412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.695 [2024-11-15 11:46:56.088417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.695 [2024-11-15 11:46:56.088422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.695 [2024-11-15 11:46:56.088427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.695 [2024-11-15 11:46:56.088431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.695 [2024-11-15 11:46:56.088436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.695 [2024-11-15 11:46:56.088441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.695 [2024-11-15 11:46:56.088445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.695 [2024-11-15 11:46:56.088450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.695 [2024-11-15 11:46:56.088455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088461] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is 
same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.088681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59730 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.089464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.089476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.089481] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.089486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.089492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.089496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.089502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.089507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.089511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.089516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.089521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.089525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.089537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.089542] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.089547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.089552] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.089557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.089565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.089570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.089574] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.089579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.089590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.089595] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.089600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 
00:23:30.696 [2024-11-15 11:46:56.089605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.089611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.089616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.089620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.089626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.089630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.089635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.089640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.089645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.089650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.089655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.089660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.089665] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.089670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.696 [2024-11-15 11:46:56.089676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.089681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.089685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.089690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.089695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.089700] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.089705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.089710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is 
same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.089715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.089720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.089726] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.089731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.089735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.089740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.089745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.089749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.089754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.089759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.089764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.089768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.089773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.089778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.089782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.089787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.089791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe59c00 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090628] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090680] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090705] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090742] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090837] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.697 [2024-11-15 11:46:56.090842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 
00:23:30.697 [2024-11-15 11:46:56.090846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.090851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.090856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.090861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.090865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.090870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.090875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.090879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.090884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.090888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.090892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.090897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.090902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.090907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.090912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a0d0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.091483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.091504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.091510] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.091515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.091523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.091529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.091534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is 
same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.091539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.091544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.091549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.091554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.091559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.091568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.091572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.091577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.091582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.091586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.091592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.091597] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.091601] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.091606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.091611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.091615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.091620] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.091624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.091629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.091634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.091638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.698 [2024-11-15 11:46:56.091644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
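[editor's note] The runs of identical tcp.c:1773 messages above are the target-side TCP transport being asked, over and over, to enter the receive state each qpair is already in (state 6 in this build). A minimal sketch of the kind of guard that emits this message, assuming SPDK-style naming; the enum values, struct layout, and logging call below are illustrative stand-ins, not the exact definitions from SPDK's lib/nvmf/tcp.c:

#include <stdio.h>

/* Illustrative stand-ins. The value 6 is taken from "state(6)" in the log;
 * actual enum names and numbering are SPDK-version-specific. */
enum tcp_pdu_recv_state {
	TCP_PDU_RECV_STATE_AWAIT_PDU_READY = 0,
	/* ... intermediate receive states elided ... */
	TCP_PDU_RECV_STATE_FINAL = 6,
};

struct tcp_qpair {
	enum tcp_pdu_recv_state recv_state;
};

/* Sketch of the guard behind the log line: re-entering the current state is
 * reported and ignored, so a qpair parked in a terminal state produces one
 * such message per attempted transition, hence the flood above. */
static void tcp_qpair_set_recv_state(struct tcp_qpair *tqpair,
				     enum tcp_pdu_recv_state state)
{
	if (tqpair->recv_state == state) {
		fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
			(void *)tqpair, (int)state);
		return;
	}
	tqpair->recv_state = state;
	/* per-state bookkeeping would follow here */
}

int main(void)
{
	struct tcp_qpair q = { .recv_state = TCP_PDU_RECV_STATE_FINAL };
	tcp_qpair_set_recv_state(&q, TCP_PDU_RECV_STATE_FINAL); /* logs once */
	return 0;
}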
00:23:30.698 [2024-11-15 11:46:56.098859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:30.698 [2024-11-15 11:46:56.098896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.698 [... ASYNC EVENT REQUEST / ABORTED - SQ DELETION pairs repeated for cid:1, cid:2, cid:3 ...]
00:23:30.698 [2024-11-15 11:46:56.098956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1138e00 is same with the state(6) to be set
00:23:30.699 [... the same group (four ASYNC EVENT REQUESTs aborted by SQ DELETION, then a recv-state error) repeated for tqpair=0x117bc90, 0x112eba0, 0xd0bfc0, 0xd0dcb0, 0xd0b790, 0x1165450, 0xc25610 and 0xd0d850, 11:46:56.098987 through 11:46:56.099691 ...]
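[editor's note] The (00/08) printed with each ABORTED completion is the NVMe status pair SCT/SC: status code type 0 (generic command status) with status code 0x08, command aborted due to SQ deletion, i.e. everything still outstanding was aborted because its submission queue was deleted during teardown. A small decoder sketch under that reading of the completion status field layout; the helper and constant names are mine, not SPDK's:

#include <stdint.h>
#include <stdio.h>

/* NVMe completion status as a 16-bit word with the phase tag in bit 0:
 * SC occupies bits 8:1, SCT bits 11:9. */
#define SCT_GENERIC            0x0
#define SC_ABORTED_SQ_DELETION 0x08

static void print_status(uint16_t status)
{
	uint16_t sc  = (status >> 1) & 0xff; /* status code */
	uint16_t sct = (status >> 9) & 0x7;  /* status code type */

	if (sct == SCT_GENERIC && sc == SC_ABORTED_SQ_DELETION)
		printf("ABORTED - SQ DELETION (%02x/%02x)\n", sct, sc);
	else
		printf("status sct=%#x sc=%#x\n", sct, sc);
}

int main(void)
{
	/* The status word behind the completions above: SCT=0, SC=0x08. */
	print_status((SCT_GENERIC << 9) | (SC_ABORTED_SQ_DELETION << 1) | 0x1);
	return 0;
}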
00:23:30.699 [2024-11-15 11:46:56.100031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.699 [2024-11-15 11:46:56.100047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.699 [... WRITE / ABORTED - SQ DELETION pairs repeated for cid:52-63, lba:31232-32640, len:128 ...]
00:23:30.699 [2024-11-15 11:46:56.100275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.699 [2024-11-15 11:46:56.100282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.700 [... READ / ABORTED - SQ DELETION pairs repeated for cid:1-12, lba:24704-26112, len:128 ...]
00:23:30.700 [2024-11-15 11:46:56.100501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.700 [2024-11-15 11:46:56.100508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.700 [2024-11-15 11:46:56.100518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.700 [2024-11-15 11:46:56.100525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.700 [2024-11-15 11:46:56.100535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.700 [2024-11-15 11:46:56.100543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.700 [2024-11-15 11:46:56.100553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.700 [2024-11-15 11:46:56.100560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.700 [2024-11-15 11:46:56.100576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.700 [2024-11-15 11:46:56.100584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.700 [2024-11-15 11:46:56.100594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.700 [2024-11-15 11:46:56.100602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.700 [2024-11-15 11:46:56.100612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.700 [2024-11-15 11:46:56.100621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.700 [2024-11-15 11:46:56.100631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.700 [2024-11-15 11:46:56.100638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.700 [2024-11-15 11:46:56.100648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.700 [2024-11-15 11:46:56.100656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.700 [2024-11-15 11:46:56.100665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.700 [2024-11-15 11:46:56.100673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.700 [2024-11-15 11:46:56.100682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:30.700 [2024-11-15 11:46:56.100690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.700 [2024-11-15 11:46:56.100699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.700 [2024-11-15 11:46:56.100707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.700 [2024-11-15 11:46:56.100717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.700 [2024-11-15 11:46:56.100724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.700 [2024-11-15 11:46:56.100734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.700 [2024-11-15 11:46:56.100741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.700 [2024-11-15 11:46:56.100752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.700 [2024-11-15 11:46:56.100760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.700 [2024-11-15 11:46:56.100769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.700 [2024-11-15 11:46:56.100777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.700 [2024-11-15 11:46:56.100787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.700 [2024-11-15 11:46:56.100794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.700 [2024-11-15 11:46:56.100804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.700 [2024-11-15 11:46:56.100811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.700 [2024-11-15 11:46:56.100821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.700 [2024-11-15 11:46:56.100828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.700 [2024-11-15 11:46:56.100840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.700 [2024-11-15 11:46:56.100847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.700 [2024-11-15 11:46:56.100857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:30.700 [2024-11-15 11:46:56.100864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.700 [2024-11-15 11:46:56.100874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.700 [2024-11-15 11:46:56.100881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.700 [2024-11-15 11:46:56.100891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.700 [2024-11-15 11:46:56.100898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.700 [2024-11-15 11:46:56.100908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.700 [2024-11-15 11:46:56.100915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.700 [2024-11-15 11:46:56.100924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.700 [2024-11-15 11:46:56.100932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.700 [2024-11-15 11:46:56.100942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.700 [2024-11-15 11:46:56.100949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.700 [2024-11-15 11:46:56.100959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.700 [2024-11-15 11:46:56.100966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.700 [2024-11-15 11:46:56.100975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.700 [2024-11-15 11:46:56.100983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.700 [2024-11-15 11:46:56.100993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.700 [2024-11-15 11:46:56.101000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.701 [2024-11-15 11:46:56.101010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.701 [2024-11-15 11:46:56.101017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.701 [2024-11-15 11:46:56.101028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.701 [2024-11-15 
11:46:56.101036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.701 [2024-11-15 11:46:56.101046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.701 [2024-11-15 11:46:56.101055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.701 [2024-11-15 11:46:56.101065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.701 [2024-11-15 11:46:56.101072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.701 [2024-11-15 11:46:56.101082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.701 [2024-11-15 11:46:56.101090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.701 [2024-11-15 11:46:56.101099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.701 [2024-11-15 11:46:56.101106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.701 [2024-11-15 11:46:56.101116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.701 [2024-11-15 11:46:56.101123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.701 [2024-11-15 11:46:56.101133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.701 [2024-11-15 11:46:56.101140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.701 [2024-11-15 11:46:56.101150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.701 [2024-11-15 11:46:56.101157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.701 [2024-11-15 11:46:56.101472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.701 [2024-11-15 11:46:56.101493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.701 [2024-11-15 11:46:56.101506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.701 [2024-11-15 11:46:56.101514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.701 [2024-11-15 11:46:56.101523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.701 [2024-11-15 11:46:56.101532] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.701 [2024-11-15 11:46:56.101542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.701 [2024-11-15 11:46:56.101549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.701 [2024-11-15 11:46:56.101558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.701 [2024-11-15 11:46:56.101573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.701 [2024-11-15 11:46:56.101583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.701 [2024-11-15 11:46:56.101583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.701 [2024-11-15 11:46:56.101591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.701 [2024-11-15 11:46:56.101607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.701 [2024-11-15 11:46:56.101607] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.701 [2024-11-15 11:46:56.101617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.701 [2024-11-15 11:46:56.101618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.701 [2024-11-15 11:46:56.101623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.701 [2024-11-15 11:46:56.101628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.701 [2024-11-15 11:46:56.101629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.701 [2024-11-15 11:46:56.101638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.701 [2024-11-15 11:46:56.101638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.701 [2024-11-15 11:46:56.101644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.701 [2024-11-15 11:46:56.101649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.701 [2024-11-15 11:46:56.101651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.701 [2024-11-15 11:46:56.101657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.701 
[2024-11-15 11:46:56.101657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.701 [2024-11-15 11:46:56.101666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.701 [2024-11-15 11:46:56.101670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.701 [2024-11-15 11:46:56.101673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.701 [2024-11-15 11:46:56.101678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.701 [2024-11-15 11:46:56.101679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.701 [2024-11-15 11:46:56.101687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.701 [2024-11-15 11:46:56.101689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.701 [2024-11-15 11:46:56.101692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.701 [2024-11-15 11:46:56.101697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.701 [2024-11-15 11:46:56.101698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.701 [2024-11-15 11:46:56.101707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.701 [2024-11-15 11:46:56.101709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.701 [2024-11-15 11:46:56.101717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.701 [2024-11-15 11:46:56.101719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.701 [2024-11-15 11:46:56.101723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.701 [2024-11-15 11:46:56.101729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.701 [2024-11-15 11:46:56.101730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.701 [2024-11-15 11:46:56.101734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.701 [2024-11-15 11:46:56.101738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.701 [2024-11-15 11:46:56.101740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.701
[2024-11-15 11:46:56.101746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.701 [2024-11-15 11:46:56.101749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.701 [2024-11-15 11:46:56.101751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.701 [2024-11-15 11:46:56.101757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.701 [2024-11-15 11:46:56.101757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.701 [2024-11-15 11:46:56.101762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5a5c0 is same with the state(6) to be set 00:23:30.701 [2024-11-15 11:46:56.101767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.701 [2024-11-15 11:46:56.101775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.701 [2024-11-15 11:46:56.101785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.701 [2024-11-15 11:46:56.101792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.701 [2024-11-15 11:46:56.101802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.701 [2024-11-15 11:46:56.101809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.701 [2024-11-15 11:46:56.101819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.701 [2024-11-15 11:46:56.101826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.701 [2024-11-15 11:46:56.101835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.701 [2024-11-15 11:46:56.101843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.701 [2024-11-15 11:46:56.101852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.702 [2024-11-15 11:46:56.101861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.702 [2024-11-15 11:46:56.101871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.702 [2024-11-15 11:46:56.101878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.702 [2024-11-15 11:46:56.101887] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.702 [2024-11-15 11:46:56.101895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.702 [2024-11-15 11:46:56.101904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.702 [2024-11-15 11:46:56.101911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.702 [2024-11-15 11:46:56.101921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.702 [2024-11-15 11:46:56.101928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.702 [2024-11-15 11:46:56.101938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.702 [2024-11-15 11:46:56.101948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.702 [2024-11-15 11:46:56.101958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.702 [2024-11-15 11:46:56.101966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.702 [2024-11-15 11:46:56.101976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.702 [2024-11-15 11:46:56.101983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.702 [2024-11-15 11:46:56.101993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.702 [2024-11-15 11:46:56.102000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.702 [2024-11-15 11:46:56.102010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.702 [2024-11-15 11:46:56.102017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.702 [2024-11-15 11:46:56.102027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.702 [2024-11-15 11:46:56.102034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.702 [2024-11-15 11:46:56.102044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.702 [2024-11-15 11:46:56.102052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.702 [2024-11-15 11:46:56.102061] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.702 [2024-11-15 11:46:56.102068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.702 [2024-11-15 11:46:56.102079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.702 [2024-11-15 11:46:56.102087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.702 [2024-11-15 11:46:56.102097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.702 [2024-11-15 11:46:56.102104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.702 [2024-11-15 11:46:56.102113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.702 [2024-11-15 11:46:56.102121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.702 [2024-11-15 11:46:56.102349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.702 [2024-11-15 11:46:56.102364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.702 [2024-11-15 11:46:56.102369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.702 [2024-11-15 11:46:56.102374] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.702 [2024-11-15 11:46:56.102379] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.702 [2024-11-15 11:46:56.102384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.702 [2024-11-15 11:46:56.102389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.702 [2024-11-15 11:46:56.102394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.702 [2024-11-15 11:46:56.102399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.702 [2024-11-15 11:46:56.102403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.702 [2024-11-15 11:46:56.102408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.702 [2024-11-15 11:46:56.102413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.702 [2024-11-15 11:46:56.102418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.702 [2024-11-15 11:46:56.102423] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.702 [2024-11-15 11:46:56.102427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.702 [2024-11-15 11:46:56.102432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.702 [2024-11-15 11:46:56.102437] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.702 [2024-11-15 11:46:56.102442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.702 [2024-11-15 11:46:56.102447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.702 [2024-11-15 11:46:56.102452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.702 [2024-11-15 11:46:56.102457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.702 [2024-11-15 11:46:56.102464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.702 [2024-11-15 11:46:56.102469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.702 [2024-11-15 11:46:56.102473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.702 [2024-11-15 11:46:56.102478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.702 [2024-11-15 11:46:56.102483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.702 [2024-11-15 11:46:56.102487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.702 [2024-11-15 11:46:56.102492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.702 [2024-11-15 11:46:56.102497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.702 [2024-11-15 11:46:56.102502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.702 [2024-11-15 11:46:56.102506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.702 [2024-11-15 11:46:56.102511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.702 [2024-11-15 11:46:56.102516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.702 [2024-11-15 11:46:56.102520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.702 [2024-11-15 11:46:56.102525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 
00:23:30.702 [2024-11-15 11:46:56.102530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.703 [2024-11-15 11:46:56.102535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.703 [2024-11-15 11:46:56.102540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.703 [2024-11-15 11:46:56.102544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.703 [2024-11-15 11:46:56.102549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.703 [2024-11-15 11:46:56.102554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.703 [2024-11-15 11:46:56.102559] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.703 [2024-11-15 11:46:56.102566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.703 [2024-11-15 11:46:56.102571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.703 [2024-11-15 11:46:56.102575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.703 [2024-11-15 11:46:56.102580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.703 [2024-11-15 11:46:56.102585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.703 [2024-11-15 11:46:56.102590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.703 [2024-11-15 11:46:56.102596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.703 [2024-11-15 11:46:56.102600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.703 [2024-11-15 11:46:56.102605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.703 [2024-11-15 11:46:56.102610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.703 [2024-11-15 11:46:56.102614] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.703 [2024-11-15 11:46:56.102619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.703 [2024-11-15 11:46:56.102624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.703 [2024-11-15 11:46:56.102629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.703 [2024-11-15 11:46:56.102636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is 
same with the state(6) to be set 00:23:30.703 [2024-11-15 11:46:56.102641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.703 [2024-11-15 11:46:56.102646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.703 [2024-11-15 11:46:56.102651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.703 [2024-11-15 11:46:56.102655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.703 [2024-11-15 11:46:56.102660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.703 [2024-11-15 11:46:56.102664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5aa90 is same with the state(6) to be set 00:23:30.703 [2024-11-15 11:46:56.111584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.703 [2024-11-15 11:46:56.111619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.703 [2024-11-15 11:46:56.111633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.703 [2024-11-15 11:46:56.111643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.703 [2024-11-15 11:46:56.111656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.703 [2024-11-15 11:46:56.111665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.703 [2024-11-15 11:46:56.111677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.703 [2024-11-15 11:46:56.111685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.703 [2024-11-15 11:46:56.111695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.703 [2024-11-15 11:46:56.111703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.703 [2024-11-15 11:46:56.111712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.703 [2024-11-15 11:46:56.111725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.703 [2024-11-15 11:46:56.111735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.703 [2024-11-15 11:46:56.111742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.703 [2024-11-15 11:46:56.111751] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.703 [2024-11-15 11:46:56.111760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.703 [2024-11-15 11:46:56.111770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.703 [2024-11-15 11:46:56.111777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.703 [2024-11-15 11:46:56.111787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.703 [2024-11-15 11:46:56.111794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.703 [2024-11-15 11:46:56.111804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.703 [2024-11-15 11:46:56.111812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.703 [2024-11-15 11:46:56.111821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.703 [2024-11-15 11:46:56.111829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.703 [2024-11-15 11:46:56.111839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.703 [2024-11-15 11:46:56.111846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.703 [2024-11-15 11:46:56.111855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.703 [2024-11-15 11:46:56.111863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.703 [2024-11-15 11:46:56.111873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.703 [2024-11-15 11:46:56.111880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.703 [2024-11-15 11:46:56.111890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.703 [2024-11-15 11:46:56.111898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.703 [2024-11-15 11:46:56.111908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.703 [2024-11-15 11:46:56.111915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.703 [2024-11-15 11:46:56.111924] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.703 [2024-11-15 11:46:56.111932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.703 [2024-11-15 11:46:56.111947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.703 [2024-11-15 11:46:56.111955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.703 [2024-11-15 11:46:56.111965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.703 [2024-11-15 11:46:56.111973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.703 [2024-11-15 11:46:56.111982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.703 [2024-11-15 11:46:56.111992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.703 [2024-11-15 11:46:56.112003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.703 [2024-11-15 11:46:56.112013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.703 [2024-11-15 11:46:56.112023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.703 [2024-11-15 11:46:56.112031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.703 [2024-11-15 11:46:56.112041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.703 [2024-11-15 11:46:56.112049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.703 [2024-11-15 11:46:56.112058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.703 [2024-11-15 11:46:56.112067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.703 [2024-11-15 11:46:56.112076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.704 [2024-11-15 11:46:56.112084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.704 [2024-11-15 11:46:56.112094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.704 [2024-11-15 11:46:56.112102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.704 [2024-11-15 11:46:56.112111] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.704 [2024-11-15 11:46:56.112119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.704 [2024-11-15 11:46:56.112128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.704 [2024-11-15 11:46:56.112136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.704 [2024-11-15 11:46:56.113769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:23:30.704 [2024-11-15 11:46:56.113809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0bfc0 (9): Bad file descriptor 00:23:30.704 [2024-11-15 11:46:56.113841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1138e00 (9): Bad file descriptor 00:23:30.704 [2024-11-15 11:46:56.113862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x117bc90 (9): Bad file descriptor 00:23:30.704 [2024-11-15 11:46:56.113903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.704 [2024-11-15 11:46:56.113914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.704 [2024-11-15 11:46:56.113924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.704 [2024-11-15 11:46:56.113931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.704 [2024-11-15 11:46:56.113940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.704 [2024-11-15 11:46:56.113948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.704 [2024-11-15 11:46:56.113956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.704 [2024-11-15 11:46:56.113964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.704 [2024-11-15 11:46:56.113972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1167a20 is same with the state(6) to be set 00:23:30.704 [2024-11-15 11:46:56.113989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x112eba0 (9): Bad file descriptor 00:23:30.704 [2024-11-15 11:46:56.114010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0dcb0 (9): Bad file descriptor 00:23:30.704 [2024-11-15 11:46:56.114025] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0b790 (9): Bad file descriptor 00:23:30.704 [2024-11-15 11:46:56.114042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1165450 (9): Bad file descriptor 00:23:30.704 [2024-11-15 11:46:56.114066] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc25610 (9): Bad file descriptor 00:23:30.704 [2024-11-15 11:46:56.114086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0d850 (9): Bad file descriptor 00:23:30.704 [2024-11-15 11:46:56.115620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:30.704 [2024-11-15 11:46:56.115705] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:30.704 [2024-11-15 11:46:56.115749] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:30.704 [2024-11-15 11:46:56.116057] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:30.704 [2024-11-15 11:46:56.116097] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:30.704 [2024-11-15 11:46:56.116135] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:30.704 [2024-11-15 11:46:56.116246] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:30.704 [2024-11-15 11:46:56.116775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.704 [2024-11-15 11:46:56.116814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0bfc0 with addr=10.0.0.2, port=4420 00:23:30.704 [2024-11-15 11:46:56.116828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0bfc0 is same with the state(6) to be set 00:23:30.704 [2024-11-15 11:46:56.116906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.704 [2024-11-15 11:46:56.116918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0dcb0 with addr=10.0.0.2, port=4420 00:23:30.704 [2024-11-15 11:46:56.116927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0dcb0 is same with the state(6) to be set 00:23:30.704 [2024-11-15 11:46:56.117358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0bfc0 (9): Bad file descriptor 00:23:30.704 [2024-11-15 11:46:56.117382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0dcb0 (9): Bad file descriptor 00:23:30.704 [2024-11-15 11:46:56.117446] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:30.704 [2024-11-15 11:46:56.117483] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:30.704 [2024-11-15 11:46:56.117500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:23:30.704 [2024-11-15 11:46:56.117508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:23:30.704 [2024-11-15 11:46:56.117518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:23:30.704 [2024-11-15 11:46:56.117528] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:23:30.704 [2024-11-15 11:46:56.117537] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:23:30.704 [2024-11-15 11:46:56.117544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:23:30.704 [2024-11-15 11:46:56.117552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:30.704 [2024-11-15 11:46:56.117558] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:23:30.704 [2024-11-15 11:46:56.123817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1167a20 (9): Bad file descriptor 00:23:30.704 [2024-11-15 11:46:56.123971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.704 [2024-11-15 11:46:56.123985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.704 [2024-11-15 11:46:56.124001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.704 [2024-11-15 11:46:56.124010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.704 [2024-11-15 11:46:56.124020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.704 [2024-11-15 11:46:56.124028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.704 [2024-11-15 11:46:56.124039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.704 [2024-11-15 11:46:56.124046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.704 [2024-11-15 11:46:56.124056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.704 [2024-11-15 11:46:56.124064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.704 [2024-11-15 11:46:56.124074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.704 [2024-11-15 11:46:56.124081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.704 [2024-11-15 11:46:56.124091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.704 [2024-11-15 11:46:56.124100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.704 [2024-11-15 11:46:56.124110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.704 [2024-11-15 11:46:56.124122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.704 [2024-11-15 11:46:56.124131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.704 [2024-11-15 11:46:56.124139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.704 [2024-11-15 11:46:56.124149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.704 [2024-11-15 11:46:56.124157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.704 [2024-11-15 11:46:56.124167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.704 [2024-11-15 11:46:56.124175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.704 [2024-11-15 11:46:56.124185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.704 [2024-11-15 11:46:56.124194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.704 [2024-11-15 11:46:56.124203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.704 [2024-11-15 11:46:56.124211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.704 [2024-11-15 11:46:56.124220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.704 [2024-11-15 11:46:56.124228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.704 [2024-11-15 11:46:56.124238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.704 [2024-11-15 11:46:56.124246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.704 [2024-11-15 11:46:56.124255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.704 [2024-11-15 11:46:56.124264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.704 [2024-11-15 11:46:56.124274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.704 [2024-11-15 11:46:56.124283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.705 [2024-11-15 11:46:56.124293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.705 [2024-11-15 11:46:56.124300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:30.705 [2024-11-15 11:46:56.124310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.705 [2024-11-15 11:46:56.124318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.705 [2024-11-15 11:46:56.124327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.705 [2024-11-15 11:46:56.124336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.705 [2024-11-15 11:46:56.124348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.705 [2024-11-15 11:46:56.124355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.705 [2024-11-15 11:46:56.124365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.705 [2024-11-15 11:46:56.124373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.705 [2024-11-15 11:46:56.124383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.705 [2024-11-15 11:46:56.124391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.705 [2024-11-15 11:46:56.124400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.705 [2024-11-15 11:46:56.124408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.705 [2024-11-15 11:46:56.124417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.705 [2024-11-15 11:46:56.124425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.705 [2024-11-15 11:46:56.124435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.705 [2024-11-15 11:46:56.124443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.705 [2024-11-15 11:46:56.124454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.705 [2024-11-15 11:46:56.124461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.705 [2024-11-15 11:46:56.124471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.705 [2024-11-15 11:46:56.124479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:30.705 [2024-11-15 11:46:56.124489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.705 [2024-11-15 11:46:56.124496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.705 [2024-11-15 11:46:56.124506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.705 [2024-11-15 11:46:56.124514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.705 [2024-11-15 11:46:56.124524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.705 [2024-11-15 11:46:56.124532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.705 [2024-11-15 11:46:56.124542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.705 [2024-11-15 11:46:56.124549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.705 [2024-11-15 11:46:56.124559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.705 [2024-11-15 11:46:56.124576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.705 [2024-11-15 11:46:56.124586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.705 [2024-11-15 11:46:56.124594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.705 [2024-11-15 11:46:56.124604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.705 [2024-11-15 11:46:56.124611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.705 [2024-11-15 11:46:56.124622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.705 [2024-11-15 11:46:56.124629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.705 [2024-11-15 11:46:56.124638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.705 [2024-11-15 11:46:56.124646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.705 [2024-11-15 11:46:56.124656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.705 [2024-11-15 11:46:56.124663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.705 [2024-11-15 
11:46:56.124673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.705 [2024-11-15 11:46:56.124681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.705 [2024-11-15 11:46:56.124691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.705 [2024-11-15 11:46:56.124698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.705 [2024-11-15 11:46:56.124708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.705 [2024-11-15 11:46:56.124715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.705 [2024-11-15 11:46:56.124725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.705 [2024-11-15 11:46:56.124734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.705 [2024-11-15 11:46:56.124743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.705 [2024-11-15 11:46:56.124752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.705 [2024-11-15 11:46:56.124762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.705 [2024-11-15 11:46:56.124769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.705 [2024-11-15 11:46:56.124779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.705 [2024-11-15 11:46:56.124787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.705 [2024-11-15 11:46:56.124799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.705 [2024-11-15 11:46:56.124807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.705 [2024-11-15 11:46:56.124817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.705 [2024-11-15 11:46:56.124825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.705 [2024-11-15 11:46:56.124834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.705 [2024-11-15 11:46:56.124842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.705 [2024-11-15 11:46:56.124852] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.705 [2024-11-15 11:46:56.124861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.705 [2024-11-15 11:46:56.124871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.705 [2024-11-15 11:46:56.124879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.705 [2024-11-15 11:46:56.124888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.705 [2024-11-15 11:46:56.124896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.705 [2024-11-15 11:46:56.124906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.705 [2024-11-15 11:46:56.124913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.705 [2024-11-15 11:46:56.124923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.705 [2024-11-15 11:46:56.124932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.705 [2024-11-15 11:46:56.124942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.705 [2024-11-15 11:46:56.124949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.705 [2024-11-15 11:46:56.124959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.705 [2024-11-15 11:46:56.124967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.705 [2024-11-15 11:46:56.124977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.705 [2024-11-15 11:46:56.124985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.705 [2024-11-15 11:46:56.124994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.706 [2024-11-15 11:46:56.125002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.706 [2024-11-15 11:46:56.125012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.706 [2024-11-15 11:46:56.125021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.706 [2024-11-15 11:46:56.125032] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.706 [2024-11-15 11:46:56.125039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.706 [2024-11-15 11:46:56.125049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.706 [2024-11-15 11:46:56.125057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.706 [2024-11-15 11:46:56.125066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.706 [2024-11-15 11:46:56.125073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.706 [2024-11-15 11:46:56.125084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.706 [2024-11-15 11:46:56.125091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.706 [2024-11-15 11:46:56.125101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.706 [2024-11-15 11:46:56.125109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.706 [2024-11-15 11:46:56.125118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.706 [2024-11-15 11:46:56.125125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.706 [2024-11-15 11:46:56.125135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf128c0 is same with the state(6) to be set 00:23:30.706 [2024-11-15 11:46:56.126429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.706 [2024-11-15 11:46:56.126446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.706 [2024-11-15 11:46:56.126459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.706 [2024-11-15 11:46:56.126470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.706 [2024-11-15 11:46:56.126482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.706 [2024-11-15 11:46:56.126491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.706 [2024-11-15 11:46:56.126502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.706 [2024-11-15 11:46:56.126512] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.706 [2024-11-15 11:46:56.126524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.706 [2024-11-15 11:46:56.126533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.706 [2024-11-15 11:46:56.126545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.706 [2024-11-15 11:46:56.126557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.706 [2024-11-15 11:46:56.126573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.706 [2024-11-15 11:46:56.126581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.706 [2024-11-15 11:46:56.126591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.706 [2024-11-15 11:46:56.126599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.706 [2024-11-15 11:46:56.126609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.706 [2024-11-15 11:46:56.126616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.706 [2024-11-15 11:46:56.126626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.706 [2024-11-15 11:46:56.126634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.706 [2024-11-15 11:46:56.126644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.706 [2024-11-15 11:46:56.126652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.706 [2024-11-15 11:46:56.126663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.706 [2024-11-15 11:46:56.126670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.706 [2024-11-15 11:46:56.126681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.706 [2024-11-15 11:46:56.126689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.706 [2024-11-15 11:46:56.126698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.706 [2024-11-15 11:46:56.126706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.706 [2024-11-15 11:46:56.126716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.706 [2024-11-15 11:46:56.126724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.706 [2024-11-15 11:46:56.126735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.706 [2024-11-15 11:46:56.126743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.706 [2024-11-15 11:46:56.126752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.706 [2024-11-15 11:46:56.126761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.706 [2024-11-15 11:46:56.126771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.706 [2024-11-15 11:46:56.126778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.706 [2024-11-15 11:46:56.126788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.706 [2024-11-15 11:46:56.126798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.706 [2024-11-15 11:46:56.126809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.706 [2024-11-15 11:46:56.126817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.706 [2024-11-15 11:46:56.126827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.706 [2024-11-15 11:46:56.126834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.706 [2024-11-15 11:46:56.126845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.706 [2024-11-15 11:46:56.126852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.706 [2024-11-15 11:46:56.126862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.706 [2024-11-15 11:46:56.126870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.706 [2024-11-15 11:46:56.126880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.706 [2024-11-15 11:46:56.126888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.706 [2024-11-15 11:46:56.126898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.706 [2024-11-15 11:46:56.126906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.706 [2024-11-15 11:46:56.126915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.706 [2024-11-15 11:46:56.126924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.706 [2024-11-15 11:46:56.126934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.706 [2024-11-15 11:46:56.126941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.706 [2024-11-15 11:46:56.126951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.706 [2024-11-15 11:46:56.126958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.706 [2024-11-15 11:46:56.126968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.706 [2024-11-15 11:46:56.126976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.706 [2024-11-15 11:46:56.126985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.706 [2024-11-15 11:46:56.126993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.706 [2024-11-15 11:46:56.127004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.706 [2024-11-15 11:46:56.127012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.706 [2024-11-15 11:46:56.127023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.706 [2024-11-15 11:46:56.127031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.707 [2024-11-15 11:46:56.127041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.707 [2024-11-15 11:46:56.127048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.707 [2024-11-15 11:46:56.127058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.707 [2024-11-15 11:46:56.127066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:30.707 [2024-11-15 11:46:56.127075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.707 [2024-11-15 11:46:56.127083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.707 [2024-11-15 11:46:56.127093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.707 [2024-11-15 11:46:56.127100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.707 [2024-11-15 11:46:56.127111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.707 [2024-11-15 11:46:56.127119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.707 [2024-11-15 11:46:56.127128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.707 [2024-11-15 11:46:56.127136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.707 [2024-11-15 11:46:56.127145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.707 [2024-11-15 11:46:56.127152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.707 [2024-11-15 11:46:56.127162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.707 [2024-11-15 11:46:56.127171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.707 [2024-11-15 11:46:56.127180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.707 [2024-11-15 11:46:56.127188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.707 [2024-11-15 11:46:56.127197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.707 [2024-11-15 11:46:56.127205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.707 [2024-11-15 11:46:56.127215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.707 [2024-11-15 11:46:56.127222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.707 [2024-11-15 11:46:56.127231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.707 [2024-11-15 11:46:56.127251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:30.707 [2024-11-15 11:46:56.127261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.707 [2024-11-15 11:46:56.127269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.707 [2024-11-15 11:46:56.127279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.707 [2024-11-15 11:46:56.127286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.707 [2024-11-15 11:46:56.127296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.707 [2024-11-15 11:46:56.127304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.707 [2024-11-15 11:46:56.127313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.707 [2024-11-15 11:46:56.127321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.707 [2024-11-15 11:46:56.127330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.707 [2024-11-15 11:46:56.127339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.707 [2024-11-15 11:46:56.127349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.707 [2024-11-15 11:46:56.127356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.707 [2024-11-15 11:46:56.127366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.707 [2024-11-15 11:46:56.127374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.707 [2024-11-15 11:46:56.127383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.707 [2024-11-15 11:46:56.127390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.707 [2024-11-15 11:46:56.127401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.707 [2024-11-15 11:46:56.127408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.707 [2024-11-15 11:46:56.127420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.707 [2024-11-15 11:46:56.127428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.707 [2024-11-15 
11:46:56.127438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.707 [2024-11-15 11:46:56.127447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.707 [2024-11-15 11:46:56.127458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.707 [2024-11-15 11:46:56.127466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.707 [2024-11-15 11:46:56.127478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.707 [2024-11-15 11:46:56.127485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.707 [2024-11-15 11:46:56.127495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.707 [2024-11-15 11:46:56.127504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.707 [2024-11-15 11:46:56.127513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.707 [2024-11-15 11:46:56.127521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.707 [2024-11-15 11:46:56.127531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.707 [2024-11-15 11:46:56.127538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.707 [2024-11-15 11:46:56.127548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.707 [2024-11-15 11:46:56.127556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.707 [2024-11-15 11:46:56.127569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.707 [2024-11-15 11:46:56.127577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.707 [2024-11-15 11:46:56.127589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.707 [2024-11-15 11:46:56.127596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.707 [2024-11-15 11:46:56.127606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.707 [2024-11-15 11:46:56.127615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.707 [2024-11-15 11:46:56.127623] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf13a20 is same with the state(6) to be set 00:23:30.707 [2024-11-15 11:46:56.128912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.707 [2024-11-15 11:46:56.128928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.707 [2024-11-15 11:46:56.128939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.707 [2024-11-15 11:46:56.128947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.707 [2024-11-15 11:46:56.128957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.707 [2024-11-15 11:46:56.128966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.708 [2024-11-15 11:46:56.128975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.708 [2024-11-15 11:46:56.128983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.708 [2024-11-15 11:46:56.128996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.708 [2024-11-15 11:46:56.129004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.708 [2024-11-15 11:46:56.129014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.708 [2024-11-15 11:46:56.129021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.708 [2024-11-15 11:46:56.129031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.708 [2024-11-15 11:46:56.129039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.708 [2024-11-15 11:46:56.129049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.708 [2024-11-15 11:46:56.129057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.708 [2024-11-15 11:46:56.129067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.708 [2024-11-15 11:46:56.129074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.708 [2024-11-15 11:46:56.129084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.708 [2024-11-15 11:46:56.129092] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.708 [2024-11-15 11:46:56.129102 - 11:46:56.130065] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:10-63 nsid:1 lba:17664-24448 (lba advancing by 128 per cid) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [54 identical READ/completion pairs condensed]
00:23:30.709 [2024-11-15 11:46:56.130074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110fd00 is same with the state(6) to be set
00:23:30.709 [2024-11-15 11:46:56.131352 - 11:46:56.132573] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 (lba advancing by 128 per cid) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [64 identical READ/completion pairs condensed]
00:23:30.711 [2024-11-15 11:46:56.132582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1111020 is same with the state(6) to be set
00:23:30.711 [2024-11-15 11:46:56.133863 - 11:46:56.135060] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 (lba advancing by 128 per cid) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [64 identical READ/completion pairs condensed]
00:23:30.712 [2024-11-15 11:46:56.135070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1112340 is same with the state(6) to be set
00:23:30.712 [2024-11-15 11:46:56.136335 - 11:46:56.136854] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:5-31 nsid:1 lba:17024-20352 (lba advancing by 128 per cid) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [27 identical READ/completion pairs condensed]
00:23:30.713 [2024-11-15 11:46:56.136864] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.713 [2024-11-15 11:46:56.136872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.713 [2024-11-15 11:46:56.136883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.713 [2024-11-15 11:46:56.136890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.713 [2024-11-15 11:46:56.136900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.713 [2024-11-15 11:46:56.136908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.713 [2024-11-15 11:46:56.136919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.713 [2024-11-15 11:46:56.136927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.713 [2024-11-15 11:46:56.136937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.713 [2024-11-15 11:46:56.136947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.713 [2024-11-15 11:46:56.136958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.713 [2024-11-15 11:46:56.136966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.713 [2024-11-15 11:46:56.136977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.713 [2024-11-15 11:46:56.136985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.713 [2024-11-15 11:46:56.136996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.713 [2024-11-15 11:46:56.137003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.713 [2024-11-15 11:46:56.137014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.713 [2024-11-15 11:46:56.137022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.713 [2024-11-15 11:46:56.137034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.713 [2024-11-15 11:46:56.137042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.713 [2024-11-15 11:46:56.137053] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.713 [2024-11-15 11:46:56.137060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.713 [2024-11-15 11:46:56.137071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.713 [2024-11-15 11:46:56.137079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.713 [2024-11-15 11:46:56.137089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.713 [2024-11-15 11:46:56.137097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.713 [2024-11-15 11:46:56.137107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.713 [2024-11-15 11:46:56.137115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.713 [2024-11-15 11:46:56.137126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.713 [2024-11-15 11:46:56.137134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.713 [2024-11-15 11:46:56.137145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.713 [2024-11-15 11:46:56.137153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.713 [2024-11-15 11:46:56.137163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.713 [2024-11-15 11:46:56.137171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.713 [2024-11-15 11:46:56.137181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.713 [2024-11-15 11:46:56.137189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.714 [2024-11-15 11:46:56.137199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.714 [2024-11-15 11:46:56.137208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.714 [2024-11-15 11:46:56.137218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.714 [2024-11-15 11:46:56.137226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.714 [2024-11-15 11:46:56.137235] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.714 [2024-11-15 11:46:56.137243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.714 [2024-11-15 11:46:56.137253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.714 [2024-11-15 11:46:56.137264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.714 [2024-11-15 11:46:56.137274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.714 [2024-11-15 11:46:56.137282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.714 [2024-11-15 11:46:56.137292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.714 [2024-11-15 11:46:56.137300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.714 [2024-11-15 11:46:56.137310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.714 [2024-11-15 11:46:56.137317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.714 [2024-11-15 11:46:56.137328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.714 [2024-11-15 11:46:56.137336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.714 [2024-11-15 11:46:56.137347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.714 [2024-11-15 11:46:56.137355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.714 [2024-11-15 11:46:56.137366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.714 [2024-11-15 11:46:56.137373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.714 [2024-11-15 11:46:56.137384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.714 [2024-11-15 11:46:56.137392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.714 [2024-11-15 11:46:56.137403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.714 [2024-11-15 11:46:56.137410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.714 [2024-11-15 11:46:56.137421] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.714 [2024-11-15 11:46:56.137428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.714 [2024-11-15 11:46:56.137439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.714 [2024-11-15 11:46:56.137446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.714 [2024-11-15 11:46:56.137457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.714 [2024-11-15 11:46:56.137465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.714 [2024-11-15 11:46:56.137475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.714 [2024-11-15 11:46:56.137483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.714 [2024-11-15 11:46:56.137497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.714 [2024-11-15 11:46:56.137505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.714 [2024-11-15 11:46:56.137515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.714 [2024-11-15 11:46:56.137523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.714 [2024-11-15 11:46:56.137534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.714 [2024-11-15 11:46:56.137543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.714 [2024-11-15 11:46:56.137551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1113660 is same with the state(6) to be set 00:23:30.714 [2024-11-15 11:46:56.138849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.714 [2024-11-15 11:46:56.138864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.714 [2024-11-15 11:46:56.138879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.714 [2024-11-15 11:46:56.138890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.714 [2024-11-15 11:46:56.138902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.714 [2024-11-15 11:46:56.138911] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.714 [2024-11-15 11:46:56.138926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.714 [2024-11-15 11:46:56.138937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.714 [2024-11-15 11:46:56.138950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.714 [2024-11-15 11:46:56.138960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.714 [2024-11-15 11:46:56.138971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.714 [2024-11-15 11:46:56.138981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.714 [2024-11-15 11:46:56.138993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.714 [2024-11-15 11:46:56.139002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.714 [2024-11-15 11:46:56.139014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.714 [2024-11-15 11:46:56.139024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.714 [2024-11-15 11:46:56.139035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.714 [2024-11-15 11:46:56.139045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.714 [2024-11-15 11:46:56.139059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.714 [2024-11-15 11:46:56.139067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.714 [2024-11-15 11:46:56.139077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.714 [2024-11-15 11:46:56.139085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.714 [2024-11-15 11:46:56.139094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.714 [2024-11-15 11:46:56.139103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.714 [2024-11-15 11:46:56.139112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.714 [2024-11-15 11:46:56.139120] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.714 [2024-11-15 11:46:56.139130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.714 [2024-11-15 11:46:56.139138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.714 [2024-11-15 11:46:56.139147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.714 [2024-11-15 11:46:56.139157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.714 [2024-11-15 11:46:56.139167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.714 [2024-11-15 11:46:56.139175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.714 [2024-11-15 11:46:56.139185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.714 [2024-11-15 11:46:56.139193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.714 [2024-11-15 11:46:56.139203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.714 [2024-11-15 11:46:56.139211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.714 [2024-11-15 11:46:56.139220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.714 [2024-11-15 11:46:56.139229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.714 [2024-11-15 11:46:56.139240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.715 [2024-11-15 11:46:56.139248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.715 [2024-11-15 11:46:56.139258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.715 [2024-11-15 11:46:56.139266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.715 [2024-11-15 11:46:56.139276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.715 [2024-11-15 11:46:56.139285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.715 [2024-11-15 11:46:56.139295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.715 [2024-11-15 11:46:56.139304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.715 [2024-11-15 11:46:56.139314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.715 [2024-11-15 11:46:56.139322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.715 [2024-11-15 11:46:56.139333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.715 [2024-11-15 11:46:56.139340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.715 [2024-11-15 11:46:56.139350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.715 [2024-11-15 11:46:56.139358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.715 [2024-11-15 11:46:56.139368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.715 [2024-11-15 11:46:56.139376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.715 [2024-11-15 11:46:56.139386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.715 [2024-11-15 11:46:56.139394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.715 [2024-11-15 11:46:56.139404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.715 [2024-11-15 11:46:56.139412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.715 [2024-11-15 11:46:56.139422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.715 [2024-11-15 11:46:56.139429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.715 [2024-11-15 11:46:56.139439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.715 [2024-11-15 11:46:56.139447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.715 [2024-11-15 11:46:56.139457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.715 [2024-11-15 11:46:56.139465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.715 [2024-11-15 11:46:56.139475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.715 [2024-11-15 11:46:56.139482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.715 [2024-11-15 11:46:56.139492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.715 [2024-11-15 11:46:56.139499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.715 [2024-11-15 11:46:56.139510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.715 [2024-11-15 11:46:56.139518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.715 [2024-11-15 11:46:56.139528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.715 [2024-11-15 11:46:56.139536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.715 [2024-11-15 11:46:56.139546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.715 [2024-11-15 11:46:56.139554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.715 [2024-11-15 11:46:56.139572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.715 [2024-11-15 11:46:56.139580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.715 [2024-11-15 11:46:56.139590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.715 [2024-11-15 11:46:56.139598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.715 [2024-11-15 11:46:56.139608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.715 [2024-11-15 11:46:56.139616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.715 [2024-11-15 11:46:56.139625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.715 [2024-11-15 11:46:56.139633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.715 [2024-11-15 11:46:56.139643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.715 [2024-11-15 11:46:56.139651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.715 [2024-11-15 11:46:56.139661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.715 [2024-11-15 11:46:56.139669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:30.715 [2024-11-15 11:46:56.139679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.715 [2024-11-15 11:46:56.139687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.715 [2024-11-15 11:46:56.139697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.715 [2024-11-15 11:46:56.139704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.715 [2024-11-15 11:46:56.139714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.715 [2024-11-15 11:46:56.139722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.715 [2024-11-15 11:46:56.139732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.715 [2024-11-15 11:46:56.139743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.715 [2024-11-15 11:46:56.139753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.715 [2024-11-15 11:46:56.139761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.715 [2024-11-15 11:46:56.139771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.715 [2024-11-15 11:46:56.139778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.715 [2024-11-15 11:46:56.139789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.715 [2024-11-15 11:46:56.139796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.715 [2024-11-15 11:46:56.139806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.715 [2024-11-15 11:46:56.139814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.715 [2024-11-15 11:46:56.139826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.715 [2024-11-15 11:46:56.139834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.715 [2024-11-15 11:46:56.139844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.715 [2024-11-15 11:46:56.139852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:30.715 [2024-11-15 11:46:56.139862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.715 [2024-11-15 11:46:56.139870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.715 [2024-11-15 11:46:56.139881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.715 [2024-11-15 11:46:56.139890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.715 [2024-11-15 11:46:56.139899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.715 [2024-11-15 11:46:56.139908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.715 [2024-11-15 11:46:56.139918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.715 [2024-11-15 11:46:56.139926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.715 [2024-11-15 11:46:56.139936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.715 [2024-11-15 11:46:56.139944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.715 [2024-11-15 11:46:56.139954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.715 [2024-11-15 11:46:56.139962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.716 [2024-11-15 11:46:56.139972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.716 [2024-11-15 11:46:56.139982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.716 [2024-11-15 11:46:56.139992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.716 [2024-11-15 11:46:56.140000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.716 [2024-11-15 11:46:56.140010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.716 [2024-11-15 11:46:56.140017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.716 [2024-11-15 11:46:56.140027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.716 [2024-11-15 11:46:56.140034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.716 [2024-11-15 11:46:56.140044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.716 [2024-11-15 11:46:56.140052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.716 [2024-11-15 11:46:56.140060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf4e0c0 is same with the state(6) to be set
00:23:30.716 [2024-11-15 11:46:56.141326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:23:30.716 [2024-11-15 11:46:56.141343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:23:30.716 [2024-11-15 11:46:56.141357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:23:30.716 [2024-11-15 11:46:56.141369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:23:30.716 [2024-11-15 11:46:56.141441] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:23:30.716 [2024-11-15 11:46:56.141465] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:23:30.716 [2024-11-15 11:46:56.141476] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:23:30.716 [2024-11-15 11:46:56.141567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:23:30.716 [2024-11-15 11:46:56.141582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:23:30.716 [2024-11-15 11:46:56.141594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:23:30.716 [2024-11-15 11:46:56.142023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.716 [2024-11-15 11:46:56.142039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0d850 with addr=10.0.0.2, port=4420
00:23:30.716 [2024-11-15 11:46:56.142048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0d850 is same with the state(6) to be set
00:23:30.716 [2024-11-15 11:46:56.142330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.716 [2024-11-15 11:46:56.142341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0b790 with addr=10.0.0.2, port=4420
00:23:30.716 [2024-11-15 11:46:56.142349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0b790 is same with the state(6) to be set
00:23:30.716 [2024-11-15 11:46:56.142808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.716 [2024-11-15 11:46:56.142855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1138e00 with addr=10.0.0.2, port=4420
00:23:30.716 [2024-11-15 11:46:56.142867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1138e00 is same with the state(6) to be set
00:23:30.716 [2024-11-15 11:46:56.143192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.716 [2024-11-15 11:46:56.143205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x112eba0 with addr=10.0.0.2, port=4420
00:23:30.716 [2024-11-15 11:46:56.143213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112eba0 is same with the state(6) to be set
00:23:30.716 [2024-11-15 11:46:56.145086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:23:30.716 [2024-11-15 11:46:56.145106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:23:30.716 [2024-11-15 11:46:56.145441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.716 [2024-11-15 11:46:56.145455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc25610 with addr=10.0.0.2, port=4420
00:23:30.716 [2024-11-15 11:46:56.145463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc25610 is same with the state(6) to be set
00:23:30.716 [2024-11-15 11:46:56.145646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.716 [2024-11-15 11:46:56.145659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x117bc90 with addr=10.0.0.2, port=4420
00:23:30.716 [2024-11-15 11:46:56.145666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x117bc90 is same with the state(6) to be set
00:23:30.716 [2024-11-15 11:46:56.145972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:30.716 [2024-11-15 11:46:56.145984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1165450 with addr=10.0.0.2, port=4420
00:23:30.716 [2024-11-15 11:46:56.145992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1165450 is same with the state(6) to be set
00:23:30.716 [2024-11-15 11:46:56.146004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0d850 (9): Bad file descriptor
00:23:30.716 [2024-11-15 11:46:56.146015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0b790 (9): Bad file descriptor
00:23:30.716 [2024-11-15 11:46:56.146025] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1138e00 (9): Bad file descriptor
00:23:30.716 [2024-11-15 11:46:56.146034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x112eba0 (9): Bad file descriptor
00:23:30.716 [2024-11-15 11:46:56.146122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.716 [2024-11-15 11:46:56.146134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.716 [2024-11-15 11:46:56.146149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.716 [2024-11-15 11:46:56.146157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.716 [2024-11-15 11:46:56.146167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.716 [2024-11-15 11:46:56.146175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.716 [2024-11-15 11:46:56.146185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.716 [2024-11-15 11:46:56.146193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.716 [2024-11-15 11:46:56.146208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.716 [2024-11-15 11:46:56.146216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.716 [2024-11-15 11:46:56.146226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.716 [2024-11-15 11:46:56.146234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.716 [2024-11-15 11:46:56.146245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.716 [2024-11-15 11:46:56.146253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.716 [2024-11-15 11:46:56.146264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.716 [2024-11-15 11:46:56.146271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.716 [2024-11-15 11:46:56.146282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.716 [2024-11-15 11:46:56.146289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.716 [2024-11-15 11:46:56.146300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.716 [2024-11-15 11:46:56.146308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.716 [2024-11-15 11:46:56.146319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.716 [2024-11-15 11:46:56.146327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.716 [2024-11-15 11:46:56.146337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.716 [2024-11-15 11:46:56.146344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.716 [2024-11-15 11:46:56.146354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.716 [2024-11-15 11:46:56.146362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.716 [2024-11-15 11:46:56.146373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.716 [2024-11-15 11:46:56.146381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.717 [2024-11-15 11:46:56.146391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.717 [2024-11-15 11:46:56.146399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.717 [2024-11-15 11:46:56.146409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.717 [2024-11-15 11:46:56.146417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.717 [2024-11-15 11:46:56.146427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.717 [2024-11-15 11:46:56.146437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.717 [2024-11-15 11:46:56.146447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.717 [2024-11-15 11:46:56.146456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.717 [2024-11-15 11:46:56.146466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.717 [2024-11-15 11:46:56.146475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.717 [2024-11-15 11:46:56.146485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.717 [2024-11-15 11:46:56.146493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.717 [2024-11-15 11:46:56.146503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.717 [2024-11-15 11:46:56.146511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.717 [2024-11-15 11:46:56.146522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.717 [2024-11-15 11:46:56.146529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.717 [2024-11-15 11:46:56.146540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.717 [2024-11-15 11:46:56.146547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.717 [2024-11-15 11:46:56.146558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.717 [2024-11-15 11:46:56.146572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.717 [2024-11-15 11:46:56.146582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.717 [2024-11-15 11:46:56.146590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.717 [2024-11-15 11:46:56.146600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.717 [2024-11-15 11:46:56.146608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.717 [2024-11-15 11:46:56.146618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.717 [2024-11-15 11:46:56.146626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.717 [2024-11-15 11:46:56.146636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.717 [2024-11-15 11:46:56.146644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.717 [2024-11-15 11:46:56.146654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.717 [2024-11-15 11:46:56.146662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.717 [2024-11-15 11:46:56.146674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.717 [2024-11-15 11:46:56.146682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.717 [2024-11-15 11:46:56.146692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.717 [2024-11-15 11:46:56.146700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.717 [2024-11-15 11:46:56.146710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.717 [2024-11-15 11:46:56.146718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.717 [2024-11-15 11:46:56.146728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.717 [2024-11-15 11:46:56.146736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.717 [2024-11-15 11:46:56.146746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.717 [2024-11-15 11:46:56.146755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.717 [2024-11-15 11:46:56.146765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.717 [2024-11-15 11:46:56.146773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.717 [2024-11-15 11:46:56.146783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.717 [2024-11-15 11:46:56.146791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.717 [2024-11-15 11:46:56.146801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.717 [2024-11-15 11:46:56.146809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.717 [2024-11-15 11:46:56.146819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.717 [2024-11-15 11:46:56.146827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.717 [2024-11-15 11:46:56.146837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.717 [2024-11-15 11:46:56.146845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.717 [2024-11-15 11:46:56.146855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.717 [2024-11-15 11:46:56.146864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.717 [2024-11-15 11:46:56.146874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.717 [2024-11-15 11:46:56.146882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.717 [2024-11-15 11:46:56.146894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.717 [2024-11-15 11:46:56.146904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.717 [2024-11-15 11:46:56.146914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.717 [2024-11-15 11:46:56.146923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.717 [2024-11-15 11:46:56.146933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.717 [2024-11-15 11:46:56.146941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.717 [2024-11-15 11:46:56.146951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.717 [2024-11-15 11:46:56.146960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.717 [2024-11-15 11:46:56.146970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.717 [2024-11-15 11:46:56.146978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.717 [2024-11-15 11:46:56.146988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.717 [2024-11-15 11:46:56.146996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.717 [2024-11-15 11:46:56.147006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.717 [2024-11-15 11:46:56.147013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.717 [2024-11-15 11:46:56.147023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.717 [2024-11-15 11:46:56.147031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.717 [2024-11-15 11:46:56.147042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.717 [2024-11-15 11:46:56.147050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.717 [2024-11-15 11:46:56.147060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.717 [2024-11-15 11:46:56.147067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.717 [2024-11-15 11:46:56.147078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.717 [2024-11-15 11:46:56.147086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.717 [2024-11-15 11:46:56.147096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.717 [2024-11-15 11:46:56.147105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.717 [2024-11-15 11:46:56.147115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.717 [2024-11-15 11:46:56.147123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.718 [2024-11-15 11:46:56.147134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.718 [2024-11-15 11:46:56.147143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.718 [2024-11-15 11:46:56.147153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.718 [2024-11-15 11:46:56.147162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.718 [2024-11-15 11:46:56.147173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.718 [2024-11-15 11:46:56.147180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.718 [2024-11-15 11:46:56.147191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.718 [2024-11-15 11:46:56.147199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.718 [2024-11-15 11:46:56.147209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.718 [2024-11-15 11:46:56.147217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.718 [2024-11-15 11:46:56.147227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.718 [2024-11-15 11:46:56.147235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.718 [2024-11-15 11:46:56.147245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.718 [2024-11-15 11:46:56.147253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.718 [2024-11-15 11:46:56.147263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.718 [2024-11-15 11:46:56.147271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.718 [2024-11-15 11:46:56.147281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.718 [2024-11-15 11:46:56.147289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.718 [2024-11-15 
11:46:56.147298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.718 [2024-11-15 11:46:56.147306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.718 [2024-11-15 11:46:56.147315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1114930 is same with the state(6) to be set 00:23:30.718 task offset: 31104 on job bdev=Nvme4n1 fails 00:23:30.718 00:23:30.718 Latency(us) 00:23:30.718 [2024-11-15T10:46:56.216Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:30.718 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:30.718 Job: Nvme1n1 ended in about 0.96 seconds with error 00:23:30.718 Verification LBA range: start 0x0 length 0x400 00:23:30.718 Nvme1n1 : 0.96 200.94 12.56 66.98 0.00 236148.05 16056.32 265639.25 00:23:30.718 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:30.718 Job: Nvme2n1 ended in about 0.97 seconds with error 00:23:30.718 Verification LBA range: start 0x0 length 0x400 00:23:30.718 Nvme2n1 : 0.97 132.43 8.28 66.21 0.00 312268.23 32768.00 290106.03 00:23:30.718 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:30.718 Job: Nvme3n1 ended in about 0.97 seconds with error 00:23:30.718 Verification LBA range: start 0x0 length 0x400 00:23:30.718 Nvme3n1 : 0.97 198.13 12.38 66.04 0.00 230007.89 14199.47 279620.27 00:23:30.718 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:30.718 Job: Nvme4n1 ended in about 0.95 seconds with error 00:23:30.718 Verification LBA range: start 0x0 length 0x400 00:23:30.718 Nvme4n1 : 0.95 201.31 12.58 67.10 0.00 221459.09 13926.40 265639.25 00:23:30.718 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:30.718 Job: Nvme5n1 ended in about 0.97 seconds with error 00:23:30.718 Verification LBA range: start 0x0 length 0x400 00:23:30.718 Nvme5n1 : 0.97 131.76 8.23 65.88 0.00 294950.12 19333.12 277872.64 00:23:30.718 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:30.718 Job: Nvme6n1 ended in about 0.97 seconds with error 00:23:30.718 Verification LBA range: start 0x0 length 0x400 00:23:30.718 Nvme6n1 : 0.97 131.42 8.21 65.71 0.00 289539.41 16930.13 300591.79 00:23:30.718 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:30.718 Job: Nvme7n1 ended in about 0.98 seconds with error 00:23:30.718 Verification LBA range: start 0x0 length 0x400 00:23:30.718 Nvme7n1 : 0.98 196.63 12.29 65.54 0.00 212955.84 12997.97 279620.27 00:23:30.718 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:30.718 Job: Nvme8n1 ended in about 0.98 seconds with error 00:23:30.718 Verification LBA range: start 0x0 length 0x400 00:23:30.718 Nvme8n1 : 0.98 135.86 8.49 65.38 0.00 271467.63 14199.47 276125.01 00:23:30.718 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:30.718 Job: Nvme9n1 ended in about 0.99 seconds with error 00:23:30.718 Verification LBA range: start 0x0 length 0x400 00:23:30.718 Nvme9n1 : 0.99 129.46 8.09 64.73 0.00 275566.93 34515.63 274377.39 00:23:30.718 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:30.718 Job: Nvme10n1 ended in about 0.98 seconds with error 00:23:30.718 Verification LBA range: start 0x0 length 0x400 
00:23:30.718 Nvme10n1 : 0.98 130.42 8.15 65.21 0.00 266790.40 17694.72 256901.12 00:23:30.718 [2024-11-15T10:46:56.216Z] =================================================================================================================== 00:23:30.718 [2024-11-15T10:46:56.216Z] Total : 1588.34 99.27 658.78 0.00 256916.72 12997.97 300591.79 00:23:30.979 [2024-11-15 11:46:56.177930] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:30.979 [2024-11-15 11:46:56.177984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:23:30.979 [2024-11-15 11:46:56.178393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.979 [2024-11-15 11:46:56.178415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0dcb0 with addr=10.0.0.2, port=4420 00:23:30.979 [2024-11-15 11:46:56.178425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0dcb0 is same with the state(6) to be set 00:23:30.979 [2024-11-15 11:46:56.178604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.979 [2024-11-15 11:46:56.178615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0bfc0 with addr=10.0.0.2, port=4420 00:23:30.979 [2024-11-15 11:46:56.178624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0bfc0 is same with the state(6) to be set 00:23:30.979 [2024-11-15 11:46:56.178637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc25610 (9): Bad file descriptor 00:23:30.979 [2024-11-15 11:46:56.178650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x117bc90 (9): Bad file descriptor 00:23:30.979 [2024-11-15 11:46:56.178661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1165450 (9): Bad file descriptor 00:23:30.979 [2024-11-15 11:46:56.178677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:23:30.979 [2024-11-15 11:46:56.178685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:23:30.979 [2024-11-15 11:46:56.178695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:23:30.979 [2024-11-15 11:46:56.178705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:23:30.979 [2024-11-15 11:46:56.178714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:23:30.979 [2024-11-15 11:46:56.178721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:23:30.979 [2024-11-15 11:46:56.178729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:23:30.979 [2024-11-15 11:46:56.178736] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
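The abort storm above is easier to audit with a quick per-queue tally of aborted completions taken straight from the captured console output. A minimal sketch, assuming a capture file containing the spdk_nvme_print_completion lines in the format shown (the console.log path is hypothetical):

    #!/usr/bin/env bash
    # Tally NVMe completions aborted by SQ deletion, grouped by qpair id.
    # Assumes the spdk_nvme_print_completion format seen in the log above.
    LOG=${1:-console.log}    # hypothetical capture of this console output
    grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' "$LOG" \
      | awk '{count[$NF]++} END {for (q in count) print q, count[q]}' \
      | sort

On this run the tally for qid:1 would come out in the dozens, matching the elided run above.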
00:23:30.979 [2024-11-15 11:46:56.178744-178765] *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] same four-step reset failure.
00:23:30.979 [2024-11-15 11:46:56.178773-178794] *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] same four-step reset failure.
00:23:30.979 [2024-11-15 11:46:56.179253-179278] posix.c:1054/nvme_tcp.c:2288/nvme_tcp.c:326: *ERROR*: connect() failed, errno = 111; sock connection error of tqpair=0x1167a20 with addr=10.0.0.2, port=4420; recv state of tqpair=0x1167a20 is same with the state(6) to be set
00:23:30.979 [2024-11-15 11:46:56.179288-179298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0dcb0, 0xd0bfc0 (9): Bad file descriptor
00:23:30.979 [2024-11-15 11:46:56.179307-179329] *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] same four-step reset failure.
00:23:30.979 [2024-11-15 11:46:56.179338-179359] *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] same four-step reset failure.
00:23:30.980 [2024-11-15 11:46:56.179370-179391] *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] same four-step reset failure.
00:23:30.980 [2024-11-15 11:46:56.179462, 179476] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] and [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:23:30.980 [2024-11-15 11:46:56.179815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1167a20 (9): Bad file descriptor
00:23:30.980 [2024-11-15 11:46:56.179829-179851] *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] same four-step reset failure.
00:23:30.980 [2024-11-15 11:46:56.179859-179880] *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] same four-step reset failure.
00:23:30.980 [2024-11-15 11:46:56.179920-179980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: resetting controller, in order, for cnode6, cnode5, cnode3, cnode2, cnode10, cnode8, cnode7
00:23:30.980 [2024-11-15 11:46:56.180029-180051] *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] same four-step reset failure.
00:23:30.980 [2024-11-15 11:46:56.180343-182452] posix.c:1054/nvme_tcp.c:2288/nvme_tcp.c:326: *ERROR*: connect() failed, errno = 111; sock connection error with addr=10.0.0.2, port=4420; recv state is same with the state(6) to be set; repeated for tqpair=0x112eba0, 0x1138e00, 0xd0b790, 0xd0d850, 0x1165450, 0x117bc90, 0xc25610
00:23:30.980 [2024-11-15 11:46:56.182481-182542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x112eba0, 0x1138e00, 0xd0b790, 0xd0d850, 0x1165450, 0x117bc90, 0xc25610 (9): Bad file descriptor
00:23:30.980 [2024-11-15 11:46:56.182589-182788] *ERROR*: same four-step reset failure, in order, for cnode6, cnode5, cnode3, cnode2, cnode10, cnode8, cnode7.
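The trace below exercises the harness's NOT helper on the already-finished bdevperf pid 1141838: wait returns 255, statuses above 128 are folded to 127, and the final case collapses any failure to es=1, so NOT succeeds exactly when the wrapped command failed. A standalone paraphrase of that exit-status logic (a sketch, not the autotest_common.sh source):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && es=127      # fold signal-range statuses, as in the trace
        case "$es" in 0) ;; *) es=1 ;; esac
        (( !es == 0 ))                # return success only if the command failed
    }
    NOT false && echo 'wrapped command failed, so NOT succeeds'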
00:23:30.980 11:46:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:23:31.922 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1141838 00:23:31.922 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:23:31.922 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1141838 00:23:31.922 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:23:31.922 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:31.922 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:23:31.922 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:31.922 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 1141838 00:23:31.922 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:23:31.922 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:31.922 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:23:31.922 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:23:31.922 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:23:31.923 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:31.923 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:23:31.923 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:31.923 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:31.923 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:31.923 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:31.923 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:31.923 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:23:31.923 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:31.923 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:23:31.923 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:31.923 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:31.923 rmmod nvme_tcp 00:23:31.923 
rmmod nvme_fabrics 00:23:31.923 rmmod nvme_keyring 00:23:31.923 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:32.184 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:23:32.184 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:23:32.184 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1141452 ']' 00:23:32.184 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1141452 00:23:32.184 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 1141452 ']' 00:23:32.184 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 1141452 00:23:32.184 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1141452) - No such process 00:23:32.184 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@979 -- # echo 'Process with pid 1141452 is not found' 00:23:32.184 Process with pid 1141452 is not found 00:23:32.184 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:32.184 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:32.184 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:32.184 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:23:32.184 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:23:32.184 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:32.184 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:23:32.184 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:32.184 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:32.184 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:32.184 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:32.184 11:46:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:34.098 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:34.098 00:23:34.098 real 0m7.860s 00:23:34.098 user 0m19.445s 00:23:34.098 sys 0m1.289s 00:23:34.098 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:34.098 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:34.098 ************************************ 00:23:34.098 END TEST nvmf_shutdown_tc3 00:23:34.098 ************************************ 00:23:34.098 11:46:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:23:34.098 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:23:34.098 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:23:34.098 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:23:34.098 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:34.098 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:34.359 ************************************ 00:23:34.359 START TEST nvmf_shutdown_tc4 00:23:34.359 ************************************ 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc4 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:34.359 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:34.359 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:34.359 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.360 11:46:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:34.360 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:34.360 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:34.360 11:46:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:34.360 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:34.620 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:34.620 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:34.620 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:34.620 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:34.620 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:34.620 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.568 ms 00:23:34.620 00:23:34.620 --- 10.0.0.2 ping statistics --- 00:23:34.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.620 rtt min/avg/max/mdev = 0.568/0.568/0.568/0.000 ms 00:23:34.620 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:34.620 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:34.620 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:23:34.620 00:23:34.620 --- 10.0.0.1 ping statistics --- 00:23:34.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:34.620 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:23:34.620 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:34.620 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:23:34.620 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:34.620 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:34.620 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:34.620 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:34.620 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:34.620 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:34.620 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:34.620 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:34.620 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:34.620 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:34.620 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:34.620 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1143096 00:23:34.620 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1143096 00:23:34.620 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:34.620 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@833 -- # '[' -z 1143096 ']' 00:23:34.620 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.620 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:34.620 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
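nvmfappstart launches the target inside the namespace and then blocks in waitforlisten until the RPC socket answers, which is the "Waiting for process to start up..." message above. A rough standalone equivalent; the polling loop here is a sketch, the harness's waitforlisten is more thorough (and the quadruple ip netns exec prefix in the traced command line is harmless, since re-entering the same namespace changes nothing):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# poll the RPC socket until the app is up, bailing out if it died during startup
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    kill -0 "$nvmfpid" || exit 1
    sleep 0.5
done

The core mask -m 0x1E is binary 11110, i.e. reactors on cores 1 through 4, which matches the four "Reactor started on core ..." notices that follow; -e 0xFFFF enables all tracepoint groups, hence the spdk_trace hints in the startup banner.
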
00:23:34.620 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:34.620 11:46:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:34.620 [2024-11-15 11:47:00.021786] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:23:34.620 [2024-11-15 11:47:00.021835] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:34.620 [2024-11-15 11:47:00.088440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:34.880 [2024-11-15 11:47:00.119318] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:34.880 [2024-11-15 11:47:00.119347] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:34.880 [2024-11-15 11:47:00.119352] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:34.880 [2024-11-15 11:47:00.119357] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:34.880 [2024-11-15 11:47:00.119361] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:34.880 [2024-11-15 11:47:00.120625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:34.880 [2024-11-15 11:47:00.120817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:34.880 [2024-11-15 11:47:00.120972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:34.880 [2024-11-15 11:47:00.120974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:34.880 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:34.880 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@866 -- # return 0 00:23:34.880 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:34.880 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:34.880 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:34.880 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:34.880 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:34.880 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.880 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:34.880 [2024-11-15 11:47:00.257601] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:34.880 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.880 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:34.880 11:47:00 
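rpc_cmd is the harness's wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, and the "TCP Transport Init" notice above is the target acting on the call. Outside the harness the same transport would be created with something like the following; -u sets an 8 KiB io-unit-size, while -o is a TCP-specific option that nvmf/common.sh appends via NVMF_TRANSPORT_OPTS (see rpc.py nvmf_create_transport --help for its exact meaning):

./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192
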
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:34.880 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:34.880 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:34.880 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:34.880 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:34.880 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:34.880 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:34.880 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:34.880 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:34.880 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:34.880 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:34.880 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:34.880 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:34.880 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:34.880 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:34.880 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:34.880 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:34.880 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:34.880 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:34.880 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:34.880 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:34.880 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:34.880 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:34.880 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:34.880 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:34.880 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.880 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:34.880 Malloc1 
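The ten cat calls above append one RPC fragment per subsystem to rpcs.txt, and the bare rpc_cmd then replays the whole file over a single RPC connection; the Malloc1 ... Malloc10 lines around this point are those bdevs coming into existence. The fragments themselves are not echoed into this log, but a plausible per-subsystem batch looks like this (sizes and serial numbers are illustrative, not taken from shutdown.sh):

for i in {1..10}; do
cat <<EOF
bdev_malloc_create -b Malloc$i 128 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done > rpcs.txt
./scripts/rpc.py < rpcs.txt   # rpc.py executes stdin line by line as RPC calls

The cnodeN NQNs and Malloc bdev names match what the failure messages later in this log refer to.
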
00:23:34.880 [2024-11-15 11:47:00.375122] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:35.140 Malloc2 00:23:35.140 Malloc3 00:23:35.140 Malloc4 00:23:35.140 Malloc5 00:23:35.140 Malloc6 00:23:35.140 Malloc7 00:23:35.140 Malloc8 00:23:35.399 Malloc9 00:23:35.399 Malloc10 00:23:35.399 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:35.399 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:35.399 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:35.399 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:35.399 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1143357 00:23:35.399 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:23:35.399 11:47:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:23:35.399 [2024-11-15 11:47:00.854159] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:40.688 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:40.688 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1143096 00:23:40.688 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 1143096 ']' 00:23:40.688 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 1143096 00:23:40.688 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # uname 00:23:40.688 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:40.688 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1143096 00:23:40.688 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:40.688 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:40.688 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1143096' 00:23:40.688 killing process with pid 1143096 00:23:40.688 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@971 -- # kill 1143096 00:23:40.688 11:47:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@976 -- # wait 1143096 00:23:40.688 [2024-11-15 11:47:05.850841] 
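This is the heart of shutdown_tc4: start spdk_nvme_perf against the target (queue depth 128, 45056-byte random writes for 20 seconds, four qpairs via -P 4), give it five seconds to ramp up, then kill the target out from under it. Reduced to its essentials, with the remaining perf flags left exactly as traced:

./build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
    -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
perfpid=$!
sleep 5
kill -0 "$nvmfpid" && kill "$nvmfpid"   # killprocess: SIGTERM the nvmf target mid-I/O
wait "$nvmfpid"

Everything after this point is the initiator discovering that its admin and I/O qpairs just died.
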
00:23:40.688 [2024-11-15 11:47:05.850841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1a440 is same with the state(6) to be set
00:23:40.688 (previous message repeated twice more for tqpair=0xc1a440)
00:23:40.688 [2024-11-15 11:47:05.851168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc195b0 is same with the state(6) to be set
00:23:40.688 (previous message repeated twice more for tqpair=0xc195b0)
00:23:40.688 Write completed with error (sct=0, sc=8)
00:23:40.688 starting I/O failed: -6
(bursts of the two messages above, dozens of lines each, recur between every record below and are elided)
00:23:40.688 [2024-11-15 11:47:05.852296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:40.689 [2024-11-15 11:47:05.853912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:40.689 [2024-11-15 11:47:05.854022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc17850 is same with the state(6) to be set
00:23:40.689 (previous message repeated four more times for tqpair=0xc17850, interleaved with failing writes)
00:23:40.689 [2024-11-15 11:47:05.855362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:40.689 NVMe io qpair process completion error
00:23:40.690 [2024-11-15 11:47:05.856667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:40.690 [2024-11-15 11:47:05.856846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc18bf0 is same with the state(6) to be set
00:23:40.690 (previous message repeated four more times for tqpair=0xc18bf0, interleaved with failing writes)
00:23:40.690 [2024-11-15 11:47:05.857471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:40.690 [2024-11-15 11:47:05.858387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:40.691 [2024-11-15 11:47:05.860011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:40.691 NVMe io qpair process completion error
00:23:40.691 [2024-11-15 11:47:05.861138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:40.692 [2024-11-15 11:47:05.862033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:40.692 [2024-11-15 11:47:05.862949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:40.692 [2024-11-15 11:47:05.864379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:40.692 NVMe io qpair process completion error
00:23:40.693 [2024-11-15 11:47:05.865918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
(more failed-write bursts follow; the excerpt breaks off mid-burst)
completed with error (sct=0, sc=8) 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 [2024-11-15 11:47:05.866798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write 
completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.693 starting I/O failed: -6 00:23:40.693 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write 
completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 starting I/O failed: -6 00:23:40.694 [2024-11-15 11:47:05.870206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:40.694 NVMe io qpair process completion error 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 Write completed with error (sct=0, sc=8) 00:23:40.694 Write 
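For context (not part of the captured output): the *ERROR* entries above come from SPDK's completion-polling path when the TCP connection behind a qpair disappears mid-test. spdk_nvme_qpair_process_completions() then returns a negative errno (-6 is -ENXIO, "No such device or address"), and every write still outstanding on that qpair finishes through its callback with an abort status. A minimal sketch of that polling pattern, assuming an already-connected qpair (the function name poll_qpair is illustrative):

    #include <stdio.h>
    #include "spdk/nvme.h"

    static void poll_qpair(struct spdk_nvme_qpair *qpair)
    {
        /* max_completions == 0: drain everything currently in the CQ */
        int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);
        if (rc < 0) {
            /* rc is a negative errno; -6 (-ENXIO) matches the
             * "CQ transport error -6" entries in this log. Outstanding
             * I/Os now complete via their callbacks with an aborted status. */
            fprintf(stderr, "CQ transport error %d on qpair\n", rc);
        }
    }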
00:23:40.694 Write completed with error (sct=0, sc=8)
00:23:40.694 starting I/O failed: -6
[... entries repeat as above ...]
00:23:40.694 [2024-11-15 11:47:05.871323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:40.694 Write completed with error (sct=0, sc=8)
00:23:40.694 starting I/O failed: -6
[... entries repeat as above ...]
00:23:40.695 [2024-11-15 11:47:05.872139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:40.695 Write completed with error (sct=0, sc=8)
00:23:40.695 starting I/O failed: -6
[... entries repeat as above ...]
00:23:40.695 [2024-11-15 11:47:05.873074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:40.695 Write completed with error (sct=0, sc=8)
00:23:40.695 starting I/O failed: -6
[... entries repeat as above ...]
00:23:40.695 [2024-11-15 11:47:05.874736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:40.695 NVMe io qpair process completion error
00:23:40.696 Write completed with error (sct=0, sc=8)
00:23:40.696 starting I/O failed: -6
[... entries repeat as above ...]
00:23:40.696 [2024-11-15 11:47:05.875822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:40.696 Write completed with error (sct=0, sc=8)
00:23:40.696 starting I/O failed: -6
[... entries repeat as above ...]
00:23:40.696 [2024-11-15 11:47:05.876658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:40.696 Write completed with error (sct=0, sc=8)
00:23:40.696 starting I/O failed: -6
[... entries repeat as above ...]
00:23:40.696 [2024-11-15 11:47:05.877603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:40.696 Write completed with error (sct=0, sc=8)
00:23:40.696 starting I/O failed: -6
[... entries repeat as above ...]
00:23:40.697 [2024-11-15 11:47:05.880229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:40.697 NVMe io qpair process completion error
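A note on the repeated "(sct=0, sc=8)" status (decoded from the NVMe spec constants, not from anything in this log): sct=0 is the generic command status type and sc=0x8 is "Command Aborted due to SQ Deletion", consistent with the qpairs being torn down while writes were in flight. A completion callback could classify it roughly like this (write_done and its surrounding context are illustrative):

    #include <stdbool.h>
    #include "spdk/nvme.h"

    static void write_done(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
        if (spdk_nvme_cpl_is_error(cpl)) {
            /* sct=0 -> SPDK_NVME_SCT_GENERIC, sc=8 ->
             * SPDK_NVME_SC_ABORTED_SQ_DELETION: the write was aborted
             * because its submission queue went away, matching the
             * "Write completed with error (sct=0, sc=8)" entries. */
            bool aborted = cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                           cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
            (void)ctx;
            (void)aborted; /* a real consumer would retry or fail the I/O over */
        }
    }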
starting I/O failed: -6 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 starting I/O failed: -6 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 starting I/O failed: -6 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 starting I/O failed: -6 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 starting I/O failed: -6 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 starting I/O failed: -6 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 starting I/O failed: -6 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 starting I/O failed: -6 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 [2024-11-15 11:47:05.881434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 starting I/O failed: -6 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 starting I/O failed: -6 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 starting I/O failed: -6 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 starting I/O failed: -6 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 starting I/O failed: -6 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 starting I/O failed: -6 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 starting I/O failed: -6 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 starting I/O failed: -6 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 starting I/O failed: -6 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 starting I/O failed: -6 00:23:40.697 Write completed with error (sct=0, sc=8) 
00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 starting I/O failed: -6 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 starting I/O failed: -6 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 starting I/O failed: -6 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 starting I/O failed: -6 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 starting I/O failed: -6 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 starting I/O failed: -6 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 starting I/O failed: -6 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 starting I/O failed: -6 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 starting I/O failed: -6 00:23:40.697 Write completed with error (sct=0, sc=8) 00:23:40.697 starting I/O failed: -6 00:23:40.698 Write completed with error (sct=0, sc=8) 00:23:40.698 Write completed with error (sct=0, sc=8) 00:23:40.698 Write completed with error (sct=0, sc=8) 00:23:40.698 starting I/O failed: -6 00:23:40.698 Write completed with error (sct=0, sc=8) 00:23:40.698 starting I/O failed: -6 00:23:40.698 Write completed with error (sct=0, sc=8) 00:23:40.698 Write completed with error (sct=0, sc=8) 00:23:40.698 [2024-11-15 11:47:05.882333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:40.698 Write completed with error (sct=0, sc=8) 00:23:40.698 starting I/O failed: -6 00:23:40.698 Write completed with error (sct=0, sc=8) 00:23:40.698 starting I/O failed: -6 00:23:40.698 Write completed with error (sct=0, sc=8) 00:23:40.698 starting I/O failed: -6 00:23:40.698 Write completed with error (sct=0, sc=8) 00:23:40.698 Write completed with error (sct=0, sc=8) 00:23:40.698 starting I/O failed: -6 00:23:40.698 Write completed with error (sct=0, sc=8) 00:23:40.698 starting I/O failed: -6 00:23:40.698 Write completed with error (sct=0, sc=8) 00:23:40.698 starting I/O failed: -6 00:23:40.698 Write completed with error (sct=0, sc=8) 00:23:40.698 Write completed with error (sct=0, sc=8) 00:23:40.698 starting I/O failed: -6 00:23:40.698 Write completed with error (sct=0, sc=8) 00:23:40.698 starting I/O failed: -6 00:23:40.698 Write completed with error (sct=0, sc=8) 00:23:40.698 starting I/O failed: -6 00:23:40.698 Write completed with error (sct=0, sc=8) 00:23:40.698 Write completed with error (sct=0, sc=8) 00:23:40.698 starting I/O failed: -6 00:23:40.698 Write completed with error (sct=0, sc=8) 00:23:40.698 starting I/O failed: -6 00:23:40.698 Write completed with error (sct=0, sc=8) 00:23:40.698 starting I/O failed: -6 00:23:40.698 Write completed with error (sct=0, sc=8) 00:23:40.698 Write completed with error (sct=0, sc=8) 00:23:40.698 starting I/O failed: -6 00:23:40.698 Write completed with error (sct=0, sc=8) 00:23:40.698 starting I/O failed: -6 00:23:40.698 Write completed with error (sct=0, sc=8) 00:23:40.698 
starting I/O failed: -6
00:23:40.698 Write completed with error (sct=0, sc=8)
00:23:40.698 starting I/O failed: -6
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeat for the remaining writes queued on this qpair ...]
00:23:40.698 [2024-11-15 11:47:05.883264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error/I/O-failed lines ...]
00:23:40.698 [2024-11-15 11:47:05.885110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:40.698 NVMe io qpair process completion error
[... repeated write-error/I/O-failed lines ...]
00:23:40.699 [2024-11-15 11:47:05.886308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
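The same pattern repeats below for the other subsystems: the queued writes complete with an error status (sct=0, sc=8), then nvme_qpair.c logs one "CQ transport error -6" (ENXIO) per torn-down qpair. One quick way to tally those per-qpair errors from a saved copy of this output; a sketch only, and "build.log" is a placeholder filename, not a file the test produces:

  grep -o '\[nqn[^]]*\] CQ transport error -6 ([^)]*) on qpair id [0-9]*' build.log | sort | uniq -c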
[... repeated write-error/I/O-failed lines ...]
00:23:40.699 [2024-11-15 11:47:05.887125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error/I/O-failed lines ...]
00:23:40.699 [2024-11-15 11:47:05.888084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error/I/O-failed lines ...]
00:23:40.700 [2024-11-15 11:47:05.889514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:40.700 NVMe io qpair process completion error
[... repeated write-error/I/O-failed lines ...]
00:23:40.700 [2024-11-15 11:47:05.891555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error/I/O-failed lines ...]
00:23:40.701 [2024-11-15 11:47:05.892485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error/I/O-failed lines ...]
00:23:40.701 [2024-11-15 11:47:05.895082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:40.701 NVMe io qpair process completion error
[... repeated write-error/I/O-failed lines ...]
00:23:40.702 [2024-11-15 11:47:05.896257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error/I/O-failed lines ...]
00:23:40.702 [2024-11-15 11:47:05.897074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error/I/O-failed lines ...]
00:23:40.702 [2024-11-15 11:47:05.898018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error/I/O-failed lines ...]
00:23:40.703 [2024-11-15 11:47:05.899882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:40.703 NVMe io qpair process completion error
00:23:40.703 Initializing NVMe Controllers
00:23:40.703 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:23:40.703 Controller IO queue size 128, less than required.
00:23:40.703 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
[... the same two "Controller IO queue size 128" advisory lines follow each of the attach messages below ...]
00:23:40.703 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:23:40.703 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:23:40.703 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:23:40.703 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:40.703 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:23:40.703 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:23:40.703 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:23:40.703 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:23:40.703 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:23:40.703 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:23:40.703 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:23:40.703 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:23:40.703 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:23:40.703 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:23:40.703 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:23:40.703 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:23:40.703 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:23:40.703 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:23:40.703 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:23:40.703 Initialization complete. Launching workers.
00:23:40.703 ========================================================
00:23:40.703                                                                          Latency(us)
00:23:40.703 Device Information                                                     :      IOPS     MiB/s   Average       min       max
00:23:40.703 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:  1862.72     80.04  68734.94    857.48  128526.34
00:23:40.703 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:  1880.59     80.81  68107.64    666.31  125113.34
00:23:40.703 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:  1881.67     80.85  68088.22    558.12  123050.63
00:23:40.703 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:  1899.33     81.61  67489.42    818.75  123097.58
00:23:40.703 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  1858.63     79.86  68270.21    506.44  124694.58
00:23:40.703 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1892.87     81.33  67054.05    805.47  124333.69
00:23:40.703 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:  1851.30     79.55  68580.48    838.29  124054.73
00:23:40.703 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:  1890.07     81.21  67200.33    575.82  118298.19
00:23:40.703 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:  1884.26     80.96  67438.63    802.58  125522.02
00:23:40.703 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:  1882.96     80.91  67507.84    621.40  124568.71
00:23:40.703 ========================================================
00:23:40.703 Total                                                                  : 18784.40    807.14  67843.38    506.44  128526.34
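Every controller above reported "Controller IO queue size 128, less than required", so requests beyond 128 per qpair sit queued in the driver. Following the advisory would mean rerunning perf with a queue depth at or below the reported size; a sketch only — -q (queue depth), -o (I/O size), -w (workload), -t (seconds) and -r (transport ID) are standard spdk_nvme_perf options, but the values chosen here are assumptions, not what this job ran:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -q 64 -o 4096 -w write -t 10 \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode2'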
00:23:40.703 [2024-11-15 11:47:05.902687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x733ae0 is same with the state(6) to be set
00:23:40.703 [2024-11-15 11:47:05.902732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x731560 is same with the state(6) to be set
00:23:40.703 [2024-11-15 11:47:05.902762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x731890 is same with the state(6) to be set
00:23:40.703 [2024-11-15 11:47:05.902790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732410 is same with the state(6) to be set
00:23:40.703 [2024-11-15 11:47:05.902819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x733720 is same with the state(6) to be set
00:23:40.703 [2024-11-15 11:47:05.902847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x733900 is same with the state(6) to be set
00:23:40.703 [2024-11-15 11:47:05.902878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x731ef0 is same with the state(6) to be set
00:23:40.703 [2024-11-15 11:47:05.902918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732a70 is same with the state(6) to be set
00:23:40.703 [2024-11-15 11:47:05.902946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x731bc0 is same with the state(6) to be set
00:23:40.703 [2024-11-15 11:47:05.902974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x732740 is same with the state(6) to be set
00:23:40.703 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:23:40.704 11:47:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:23:41.646 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1143357
00:23:41.646 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0
00:23:41.646 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1143357
00:23:41.646 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait
00:23:41.646 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:41.646 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait
00:23:41.646 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:41.646 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 1143357
00:23:41.646 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1
00:23:41.646 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:23:41.646 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:23:41.646 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
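The NOT/wait sequence above is the harness asserting that the perf process (pid 1143357) exited non-zero after the target was shut down underneath it. A minimal sketch of that negation pattern, assuming as here that the pid is a child of the current shell:

  NOT() { if "$@"; then return 1; fi; return 0; }  # succeed only when the wrapped command fails
  NOT wait 1143357                                 # perf died with errors, so wait returns non-zero and the check passes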
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:41.646 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:41.646 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:41.647 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:41.647 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:23:41.647 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:41.647 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:23:41.647 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:41.647 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:41.647 rmmod nvme_tcp 00:23:41.647 rmmod nvme_fabrics 00:23:41.907 rmmod nvme_keyring 00:23:41.907 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:41.907 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:23:41.907 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:23:41.907 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1143096 ']' 00:23:41.907 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1143096 00:23:41.907 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 1143096 ']' 00:23:41.907 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 1143096 00:23:41.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1143096) - No such process 00:23:41.907 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@979 -- # echo 'Process with pid 1143096 is not found' 00:23:41.907 Process with pid 1143096 is not found 00:23:41.907 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:41.907 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:41.907 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:41.907 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:23:41.907 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:23:41.907 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:41.907 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:23:41.907 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:41.907 11:47:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:41.907 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.907 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:41.907 11:47:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.824 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:43.824 00:23:43.824 real 0m9.675s 00:23:43.824 user 0m25.555s 00:23:43.824 sys 0m3.955s 00:23:43.824 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:43.824 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:43.824 ************************************ 00:23:43.824 END TEST nvmf_shutdown_tc4 00:23:43.824 ************************************ 00:23:43.824 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:23:43.824 00:23:43.824 real 0m43.087s 00:23:43.824 user 1m43.687s 00:23:43.824 sys 0m13.957s 00:23:43.824 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:43.824 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:43.824 ************************************ 00:23:43.824 END TEST nvmf_shutdown 00:23:43.824 ************************************ 00:23:44.085 11:47:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:44.085 11:47:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:44.085 11:47:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:44.085 11:47:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:44.085 ************************************ 00:23:44.085 START TEST nvmf_nsid 00:23:44.085 ************************************ 00:23:44.085 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:23:44.085 * Looking for test storage... 
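For reference, the tc4 teardown traced above (stoptarget followed by nvmftestfini) reduces to roughly the shell sequence below. This is a condensed sketch inferred from the xtrace lines, not the harness's literal code; the pid variable, the namespace name cvl_0_0_ns_spdk, and the interface cvl_0_1 are taken from this particular run and would differ on other rigs.

teardown() {
  rm -f ./local-job0-0-verify.state            # per-job state file, plus the workspace bdevperf.conf/rpcs.txt
  sync                                         # flush dirty pages before unloading the host drivers
  modprobe -v -r nvme-tcp                      # also pulls out nvme_fabrics / nvme_keyring as dependents
  modprobe -v -r nvme-fabrics
  kill -0 "$nvmfpid" 2>/dev/null && kill "$nvmfpid"      # stop nvmf_tgt if it is still alive
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-tagged firewall rules
  ip netns del cvl_0_0_ns_spdk 2>/dev/null     # remove the target-side network namespace
  ip -4 addr flush cvl_0_1                     # clear the initiator-side address
}

Note the "No such process" from kill above is expected: the perf workload was already killed by the shutdown test itself, and killprocess tolerates an exited pid.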
00:23:44.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:44.085 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:44.085 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version 00:23:44.085 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:44.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.347 --rc genhtml_branch_coverage=1 00:23:44.347 --rc genhtml_function_coverage=1 00:23:44.347 --rc genhtml_legend=1 00:23:44.347 --rc geninfo_all_blocks=1 00:23:44.347 --rc geninfo_unexecuted_blocks=1 00:23:44.347 00:23:44.347 ' 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:44.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.347 --rc genhtml_branch_coverage=1 00:23:44.347 --rc genhtml_function_coverage=1 00:23:44.347 --rc genhtml_legend=1 00:23:44.347 --rc geninfo_all_blocks=1 00:23:44.347 --rc geninfo_unexecuted_blocks=1 00:23:44.347 00:23:44.347 ' 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:44.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.347 --rc genhtml_branch_coverage=1 00:23:44.347 --rc genhtml_function_coverage=1 00:23:44.347 --rc genhtml_legend=1 00:23:44.347 --rc geninfo_all_blocks=1 00:23:44.347 --rc geninfo_unexecuted_blocks=1 00:23:44.347 00:23:44.347 ' 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:44.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:44.347 --rc genhtml_branch_coverage=1 00:23:44.347 --rc genhtml_function_coverage=1 00:23:44.347 --rc genhtml_legend=1 00:23:44.347 --rc geninfo_all_blocks=1 00:23:44.347 --rc geninfo_unexecuted_blocks=1 00:23:44.347 00:23:44.347 ' 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.347 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:23:44.348 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:44.348 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:23:44.348 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:44.348 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:44.348 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:44.348 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:44.348 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:44.348 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:44.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:44.348 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:44.348 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:44.348 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:44.348 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:23:44.348 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:23:44.348 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:23:44.348 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:23:44.348 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:23:44.348 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:23:44.348 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:44.348 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:44.348 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:44.348 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:44.348 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:44.348 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.348 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:44.348 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.348 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:44.348 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:44.348 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:23:44.348 11:47:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:52.493 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:52.493 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:23:52.493 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:52.493 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:52.493 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:52.493 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:52.493 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:52.493 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:23:52.493 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:52.493 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:23:52.493 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:23:52.493 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:23:52.493 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:23:52.493 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:23:52.493 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:23:52.493 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:52.493 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:52.493 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:52.493 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:52.493 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:52.493 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:52.493 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:52.493 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:52.493 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:52.493 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:52.493 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:52.493 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:52.493 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:52.493 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:52.493 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:52.493 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:52.493 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:52.493 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:52.493 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:52.493 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:52.493 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:52.493 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:52.493 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:52.493 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.493 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.493 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:52.493 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:52.494 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
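The discovery loop traced here resolves each matching PCI function to its kernel interface through sysfs, which is where the "Found net devices under 0000:4b:00.x: cvl_0_x" lines come from. A standalone sketch of the same lookup follows; the ls-free glob/strip formulation is a simplification of what the script does, and the two BDFs are the ones from this run.

for pci in 0000:4b:00.0 0000:4b:00.1; do
  for dev in /sys/bus/pci/devices/$pci/net/*; do   # one entry per netdev bound to the function
    dev=${dev##*/}                                 # strip the sysfs path, keep the interface name
    echo "Found net devices under $pci: $dev ($(cat /sys/class/net/$dev/operstate))"
  done
done

Only interfaces whose operstate is "up" are appended to net_devs, matching the [[ up == up ]] checks in the trace.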
00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:52.494 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:52.494 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:52.494 11:47:16 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:52.494 11:47:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:52.494 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:52.494 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:52.494 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:52.494 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:52.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:52.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms 00:23:52.494 00:23:52.494 --- 10.0.0.2 ping statistics --- 00:23:52.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.494 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:23:52.494 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:52.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:52.494 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:23:52.494 00:23:52.494 --- 10.0.0.1 ping statistics --- 00:23:52.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:52.494 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:23:52.494 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:52.494 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:23:52.494 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:52.494 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:52.494 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:52.494 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:52.494 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:52.494 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:52.494 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:52.494 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:23:52.494 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:52.494 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:52.494 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:52.494 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1148712 00:23:52.494 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1148712 00:23:52.494 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:23:52.494 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 1148712 ']' 00:23:52.494 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:52.494 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:52.494 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:52.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:52.494 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:52.494 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:52.494 [2024-11-15 11:47:17.186397] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:23:52.494 [2024-11-15 11:47:17.186459] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:52.494 [2024-11-15 11:47:17.284390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.494 [2024-11-15 11:47:17.336012] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:52.494 [2024-11-15 11:47:17.336060] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:52.494 [2024-11-15 11:47:17.336069] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:52.494 [2024-11-15 11:47:17.336076] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:52.494 [2024-11-15 11:47:17.336083] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:52.494 [2024-11-15 11:47:17.336861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.756 11:47:17 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:52.756 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:23:52.756 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:52.756 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:52.756 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:52.756 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:52.756 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:52.756 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1148926 00:23:52.756 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:23:52.756 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:23:52.756 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:23:52.756 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:23:52.756 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:52.756 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:52.756 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.756 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.756 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:52.756 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.756 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:52.756 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:52.756 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:23:52.756 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:23:52.756 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:23:52.756 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=e0cd3915-c4a0-49d5-9c87-b985d66122bd 00:23:52.756 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:23:52.756 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=adaa7366-f2fd-429b-808d-73d7005985a1 00:23:52.756 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:23:52.756 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=27063c95-f6f9-4761-b1fc-122b6dcdd3cd 00:23:52.756 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:23:52.756 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.756 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:52.756 null0 00:23:52.756 null1 00:23:52.756 null2 00:23:52.756 [2024-11-15 11:47:18.106171] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:23:52.756 [2024-11-15 11:47:18.106237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1148926 ] 00:23:52.756 [2024-11-15 11:47:18.108238] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:52.756 [2024-11-15 11:47:18.132523] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:52.756 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.756 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1148926 /var/tmp/tgt2.sock 00:23:52.756 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 1148926 ']' 00:23:52.756 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:23:52.756 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:52.756 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:23:52.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
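The three UUIDs generated above are compared further down against the NGUIDs that nvme id-ns reports for nvme0n1..n3. The uuid2nguid conversion the test leans on is just dash-stripping plus case-folding; a minimal sketch, with the explicit tr-based upcasing being an assumption inferred from the uppercase comparison strings below:

uuid=e0cd3915-c4a0-49d5-9c87-b985d66122bd            # ns1uuid from this run
nguid=$(tr -d - <<< "$uuid" | tr '[:lower:]' '[:upper:]')
echo "$nguid"                                        # E0CD3915C4A049D59C87B985D66122BD

Because an NGUID is the same 128-bit value as the namespace UUID without separators, a byte-for-byte match here confirms the second target exposed each null bdev under the UUID it was created with.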
00:23:52.756 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:52.756 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:52.756 [2024-11-15 11:47:18.197384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.756 [2024-11-15 11:47:18.250666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.018 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:53.018 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:23:53.018 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:23:53.589 [2024-11-15 11:47:18.806879] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.589 [2024-11-15 11:47:18.823071] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:23:53.589 nvme0n1 nvme0n2 00:23:53.589 nvme1n1 00:23:53.589 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:23:53.589 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:23:53.589 11:47:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:54.976 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:23:54.976 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:23:54.976 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:23:54.976 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:23:54.976 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:23:54.976 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:23:54.976 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:23:54.976 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:23:54.976 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:54.976 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:23:54.976 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:23:54.976 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:23:54.976 11:47:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:23:55.921 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:55.921 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:23:55.921 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:23:55.921 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:23:55.921 11:47:21 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:23:55.921 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid e0cd3915-c4a0-49d5-9c87-b985d66122bd 00:23:55.921 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:55.921 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:23:55.921 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:23:55.921 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:23:55.921 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:56.182 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=e0cd3915c4a049d59c87b985d66122bd 00:23:56.182 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo E0CD3915C4A049D59C87B985D66122BD 00:23:56.182 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ E0CD3915C4A049D59C87B985D66122BD == \E\0\C\D\3\9\1\5\C\4\A\0\4\9\D\5\9\C\8\7\B\9\8\5\D\6\6\1\2\2\B\D ]] 00:23:56.182 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:23:56.182 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:23:56.182 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:56.182 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:23:56.182 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:23:56.182 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:23:56.182 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:23:56.182 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid adaa7366-f2fd-429b-808d-73d7005985a1 00:23:56.182 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:56.182 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:23:56.182 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:23:56.182 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:23:56.182 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:56.182 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=adaa7366f2fd429b808d73d7005985a1 00:23:56.182 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo ADAA7366F2FD429B808D73D7005985A1 00:23:56.182 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ ADAA7366F2FD429B808D73D7005985A1 == \A\D\A\A\7\3\6\6\F\2\F\D\4\2\9\B\8\0\8\D\7\3\D\7\0\0\5\9\8\5\A\1 ]] 00:23:56.182 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:23:56.182 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:23:56.182 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:23:56.182 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:23:56.182 11:47:21 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:23:56.182 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:23:56.182 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:23:56.182 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 27063c95-f6f9-4761-b1fc-122b6dcdd3cd 00:23:56.182 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:23:56.182 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:23:56.182 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:23:56.182 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:23:56.182 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:23:56.182 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=27063c95f6f94761b1fc122b6dcdd3cd 00:23:56.182 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 27063C95F6F94761B1FC122B6DCDD3CD 00:23:56.182 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 27063C95F6F94761B1FC122B6DCDD3CD == \2\7\0\6\3\C\9\5\F\6\F\9\4\7\6\1\B\1\F\C\1\2\2\B\6\D\C\D\D\3\C\D ]] 00:23:56.182 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:23:56.443 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:23:56.443 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:23:56.443 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1148926 00:23:56.443 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 1148926 ']' 00:23:56.443 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 1148926 00:23:56.443 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:23:56.443 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:56.443 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1148926 00:23:56.443 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:56.443 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:56.443 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1148926' 00:23:56.443 killing process with pid 1148926 00:23:56.443 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 1148926 00:23:56.443 11:47:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 1148926 00:23:56.704 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:23:56.704 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:56.704 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:23:56.704 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:56.704 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:23:56.704 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:56.704 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:56.704 rmmod nvme_tcp 00:23:56.704 rmmod nvme_fabrics 00:23:56.704 rmmod nvme_keyring 00:23:56.704 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:56.704 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:23:56.704 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:23:56.704 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1148712 ']' 00:23:56.704 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1148712 00:23:56.704 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 1148712 ']' 00:23:56.704 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 1148712 00:23:56.704 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:23:56.704 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:56.704 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1148712 00:23:56.704 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:56.704 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:56.704 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1148712' 00:23:56.704 killing process with pid 1148712 00:23:56.704 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 1148712 00:23:56.704 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 1148712 00:23:56.965 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:56.965 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:56.965 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:56.965 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:23:56.965 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:23:56.965 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:56.965 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:23:56.965 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:56.965 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:56.965 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.965 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:56.965 11:47:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.877 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:58.877 00:23:58.877 real 0m14.948s 00:23:58.877 user 
0m11.439s 00:23:58.877 sys 0m6.857s 00:23:58.877 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:58.877 11:47:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:23:58.877 ************************************ 00:23:58.877 END TEST nvmf_nsid 00:23:58.877 ************************************ 00:23:59.137 11:47:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:23:59.138 00:23:59.138 real 13m6.341s 00:23:59.138 user 27m28.024s 00:23:59.138 sys 3m56.248s 00:23:59.138 11:47:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:59.138 11:47:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:59.138 ************************************ 00:23:59.138 END TEST nvmf_target_extra 00:23:59.138 ************************************ 00:23:59.138 11:47:24 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:59.138 11:47:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:59.138 11:47:24 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:59.138 11:47:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:59.138 ************************************ 00:23:59.138 START TEST nvmf_host 00:23:59.138 ************************************ 00:23:59.138 11:47:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:59.138 * Looking for test storage... 00:23:59.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:59.138 11:47:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:59.138 11:47:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:23:59.138 11:47:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:59.399 11:47:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:59.399 11:47:24 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:59.399 11:47:24 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:59.399 11:47:24 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:59.399 11:47:24 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:59.399 11:47:24 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:59.399 11:47:24 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:59.399 11:47:24 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:59.399 11:47:24 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:59.399 11:47:24 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:59.399 11:47:24 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:59.399 11:47:24 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:59.399 11:47:24 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:23:59.399 11:47:24 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:23:59.399 11:47:24 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:59.399 11:47:24 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:59.399 11:47:24 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:23:59.399 11:47:24 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:23:59.399 11:47:24 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:59.399 11:47:24 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:23:59.399 11:47:24 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:59.399 11:47:24 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:23:59.399 11:47:24 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:23:59.399 11:47:24 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:59.399 11:47:24 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:23:59.399 11:47:24 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:59.399 11:47:24 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:59.399 11:47:24 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:59.399 11:47:24 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:23:59.399 11:47:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:59.399 11:47:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:59.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.399 --rc genhtml_branch_coverage=1 00:23:59.399 --rc genhtml_function_coverage=1 00:23:59.399 --rc genhtml_legend=1 00:23:59.399 --rc geninfo_all_blocks=1 00:23:59.399 --rc geninfo_unexecuted_blocks=1 00:23:59.399 00:23:59.399 ' 00:23:59.399 11:47:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:59.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.399 --rc genhtml_branch_coverage=1 00:23:59.399 --rc genhtml_function_coverage=1 00:23:59.399 --rc genhtml_legend=1 00:23:59.399 --rc geninfo_all_blocks=1 00:23:59.399 --rc geninfo_unexecuted_blocks=1 00:23:59.399 00:23:59.399 ' 00:23:59.399 11:47:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:59.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.399 --rc genhtml_branch_coverage=1 00:23:59.399 --rc genhtml_function_coverage=1 00:23:59.399 --rc genhtml_legend=1 00:23:59.399 --rc geninfo_all_blocks=1 00:23:59.399 --rc geninfo_unexecuted_blocks=1 00:23:59.399 00:23:59.399 ' 00:23:59.399 11:47:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:59.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.399 --rc genhtml_branch_coverage=1 00:23:59.399 --rc genhtml_function_coverage=1 00:23:59.399 --rc genhtml_legend=1 00:23:59.399 --rc geninfo_all_blocks=1 00:23:59.399 --rc geninfo_unexecuted_blocks=1 00:23:59.399 00:23:59.399 ' 00:23:59.399 11:47:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:59.399 11:47:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:59.399 11:47:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:59.399 11:47:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:59.399 11:47:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
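[editor's note] The cmp_versions walk traced above (scripts/common.sh@333-368) is the harness deciding whether the installed lcov predates 2.0 and therefore needs the legacy --rc lcov_* option spelling. A minimal standalone sketch of that segment-wise comparison follows; version_lt is an illustrative name, not the SPDK helper itself, and it assumes purely numeric fields, which the real code verifies with its decimal()/regex check:

    # Split each version on '.', '-' or ':' and compare field by field;
    # a missing field counts as 0, so 1.15 vs 2 compares (1,15) against (2,0).
    version_lt() {
        local -a ver1 ver2
        local v len
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly newer
        done
        return 1    # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "legacy lcov option spelling needed"

That "return 0" outcome is why every LCOV_OPTS/LCOV export in the surrounding trace carries the --rc lcov_branch_coverage=1 style flags.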
00:23:59.400 11:47:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:59.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.400 ************************************ 00:23:59.400 START TEST nvmf_multicontroller 00:23:59.400 ************************************ 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:59.400 * Looking for test storage... 
00:23:59.400 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:23:59.400 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:59.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.662 --rc genhtml_branch_coverage=1 00:23:59.662 --rc genhtml_function_coverage=1 00:23:59.662 --rc genhtml_legend=1 00:23:59.662 --rc geninfo_all_blocks=1 00:23:59.662 --rc geninfo_unexecuted_blocks=1 00:23:59.662 00:23:59.662 ' 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:59.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.662 --rc genhtml_branch_coverage=1 00:23:59.662 --rc genhtml_function_coverage=1 00:23:59.662 --rc genhtml_legend=1 00:23:59.662 --rc geninfo_all_blocks=1 00:23:59.662 --rc geninfo_unexecuted_blocks=1 00:23:59.662 00:23:59.662 ' 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:59.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.662 --rc genhtml_branch_coverage=1 00:23:59.662 --rc genhtml_function_coverage=1 00:23:59.662 --rc genhtml_legend=1 00:23:59.662 --rc geninfo_all_blocks=1 00:23:59.662 --rc geninfo_unexecuted_blocks=1 00:23:59.662 00:23:59.662 ' 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:59.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:59.662 --rc genhtml_branch_coverage=1 00:23:59.662 --rc genhtml_function_coverage=1 00:23:59.662 --rc genhtml_legend=1 00:23:59.662 --rc geninfo_all_blocks=1 00:23:59.662 --rc geninfo_unexecuted_blocks=1 00:23:59.662 00:23:59.662 ' 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:59.662 11:47:24 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:59.662 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:59.663 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.663 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.663 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.663 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:59.663 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:59.663 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:23:59.663 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:59.663 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:59.663 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:59.663 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:59.663 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:59.663 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:59.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:59.663 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:59.663 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:59.663 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:59.663 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:59.663 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:59.663 11:47:24 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:59.663 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:59.663 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:59.663 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:59.663 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:59.663 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:59.663 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:59.663 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:59.663 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:59.663 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:59.663 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.663 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:59.663 11:47:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.663 11:47:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:59.663 11:47:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:59.663 11:47:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:23:59.663 11:47:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:07.805 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:07.805 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:24:07.805 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:07.805 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:07.805 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:07.805 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:07.805 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:07.805 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:24:07.805 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:07.805 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:24:07.805 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:24:07.805 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:24:07.806 
11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:07.806 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:07.806 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:07.806 11:47:32 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:07.806 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:07.806 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
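[editor's note] At this point the device scan has matched both functions of the Intel e810 NIC (0x8086:0x159b), confirmed they are bound to the ice driver, and resolved their interface names (cvl_0_0 and cvl_0_1) through sysfs, so is_hw=yes and nvmf_tcp_init, traced next, takes over. The sysfs lookup the scan performed reduces to a short glob; pci_to_netdevs below is an illustrative name for this sketch, not a harness function:

    # Print the kernel netdev name(s) bound to one PCI function -- the same
    # /sys/bus/pci/devices/<bdf>/net/* expansion the trace runs per device.
    pci_to_netdevs() {
        local pci=$1 path
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $path ]] || continue   # skip functions with no netdev bound
            echo "${path##*/}"           # keep only the interface name
        done
    }
    pci_to_netdevs 0000:4b:00.0   # prints cvl_0_0 on this test node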
00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:07.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:07.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.717 ms 00:24:07.806 00:24:07.806 --- 10.0.0.2 ping statistics --- 00:24:07.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.806 rtt min/avg/max/mdev = 0.717/0.717/0.717/0.000 ms 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:07.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:07.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:24:07.806 00:24:07.806 --- 10.0.0.1 ping statistics --- 00:24:07.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.806 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1154083 00:24:07.806 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1154083 00:24:07.807 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:07.807 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 1154083 ']' 00:24:07.807 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:07.807 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:07.807 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:07.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:07.807 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:07.807 11:47:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:07.807 [2024-11-15 11:47:32.626232] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:24:07.807 [2024-11-15 11:47:32.626301] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:07.807 [2024-11-15 11:47:32.727143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:07.807 [2024-11-15 11:47:32.779586] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:07.807 [2024-11-15 11:47:32.779634] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:07.807 [2024-11-15 11:47:32.779642] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:07.807 [2024-11-15 11:47:32.779650] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:07.807 [2024-11-15 11:47:32.779656] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:07.807 [2024-11-15 11:47:32.781527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:07.807 [2024-11-15 11:47:32.781693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:07.807 [2024-11-15 11:47:32.781695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:08.068 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:08.068 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:24:08.068 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:08.068 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:08.068 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:08.068 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:08.068 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:08.068 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.068 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:08.068 [2024-11-15 11:47:33.510829] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:08.068 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.068 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:08.068 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.068 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:08.068 Malloc0 00:24:08.068 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.068 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:08.068 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.068 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:24:08.068 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.068 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:08.068 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.068 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:08.330 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.330 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:08.330 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.330 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:08.330 [2024-11-15 11:47:33.581850] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:08.330 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.330 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:08.330 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.330 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:08.330 [2024-11-15 11:47:33.593721] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:08.330 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.330 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:08.330 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.330 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:08.330 Malloc1 00:24:08.330 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.330 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:08.330 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.330 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:08.330 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.330 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:08.330 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.330 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:08.330 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.330 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:08.330 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.330 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:08.330 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.330 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:08.330 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.330 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:08.330 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.331 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1154192 00:24:08.331 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:08.331 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:08.331 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1154192 /var/tmp/bdevperf.sock 00:24:08.331 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 1154192 ']' 00:24:08.331 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:08.331 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:08.331 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:08.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
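[editor's note] Stripped of the xtrace noise, the target-side provisioning just completed is a short RPC sequence. rpc_cmd in the trace is a wrapper over scripts/rpk.py-style invocation of scripts/rpc.py; in this run each call additionally executes inside the cvl_0_0_ns_spdk namespace. A condensed replay, with flags exactly as traced:

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192    # transport opts as set by NVMF_TRANSPORT_OPTS
    $rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MiB backing bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # cnode2 is built the same way on top of Malloc1, so each subsystem ends up
    # reachable on two ports -- the two paths the multipath cases below exercise.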
00:24:08.331 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:08.331 11:47:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:09.275 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:09.275 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:24:09.275 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:09.275 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.275 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:09.537 NVMe0n1 00:24:09.537 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.537 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:09.537 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:09.537 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.537 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:09.537 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.537 1 00:24:09.537 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:09.537 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:24:09.537 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:09.537 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:09.537 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:09.537 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:09.537 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:09.537 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:09.537 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.537 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:09.537 request: 00:24:09.537 { 00:24:09.537 "name": "NVMe0", 00:24:09.538 "trtype": "tcp", 00:24:09.538 "traddr": "10.0.0.2", 00:24:09.538 "adrfam": "ipv4", 00:24:09.538 "trsvcid": "4420", 00:24:09.538 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:24:09.538 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:09.538 "hostaddr": "10.0.0.1", 00:24:09.538 "prchk_reftag": false, 00:24:09.538 "prchk_guard": false, 00:24:09.538 "hdgst": false, 00:24:09.538 "ddgst": false, 00:24:09.538 "allow_unrecognized_csi": false, 00:24:09.538 "method": "bdev_nvme_attach_controller", 00:24:09.538 "req_id": 1 00:24:09.538 } 00:24:09.538 Got JSON-RPC error response 00:24:09.538 response: 00:24:09.538 { 00:24:09.538 "code": -114, 00:24:09.538 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:09.538 } 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:09.538 request: 00:24:09.538 { 00:24:09.538 "name": "NVMe0", 00:24:09.538 "trtype": "tcp", 00:24:09.538 "traddr": "10.0.0.2", 00:24:09.538 "adrfam": "ipv4", 00:24:09.538 "trsvcid": "4420", 00:24:09.538 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:09.538 "hostaddr": "10.0.0.1", 00:24:09.538 "prchk_reftag": false, 00:24:09.538 "prchk_guard": false, 00:24:09.538 "hdgst": false, 00:24:09.538 "ddgst": false, 00:24:09.538 "allow_unrecognized_csi": false, 00:24:09.538 "method": "bdev_nvme_attach_controller", 00:24:09.538 "req_id": 1 00:24:09.538 } 00:24:09.538 Got JSON-RPC error response 00:24:09.538 response: 00:24:09.538 { 00:24:09.538 "code": -114, 00:24:09.538 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:09.538 } 00:24:09.538 11:47:34 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:09.538 request: 00:24:09.538 { 00:24:09.538 "name": "NVMe0", 00:24:09.538 "trtype": "tcp", 00:24:09.538 "traddr": "10.0.0.2", 00:24:09.538 "adrfam": "ipv4", 00:24:09.538 "trsvcid": "4420", 00:24:09.538 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:09.538 "hostaddr": "10.0.0.1", 00:24:09.538 "prchk_reftag": false, 00:24:09.538 "prchk_guard": false, 00:24:09.538 "hdgst": false, 00:24:09.538 "ddgst": false, 00:24:09.538 "multipath": "disable", 00:24:09.538 "allow_unrecognized_csi": false, 00:24:09.538 "method": "bdev_nvme_attach_controller", 00:24:09.538 "req_id": 1 00:24:09.538 } 00:24:09.538 Got JSON-RPC error response 00:24:09.538 response: 00:24:09.538 { 00:24:09.538 "code": -114, 00:24:09.538 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:24:09.538 } 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:09.538 11:47:34 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:09.538 request: 00:24:09.538 { 00:24:09.538 "name": "NVMe0", 00:24:09.538 "trtype": "tcp", 00:24:09.538 "traddr": "10.0.0.2", 00:24:09.538 "adrfam": "ipv4", 00:24:09.538 "trsvcid": "4420", 00:24:09.538 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:09.538 "hostaddr": "10.0.0.1", 00:24:09.538 "prchk_reftag": false, 00:24:09.538 "prchk_guard": false, 00:24:09.538 "hdgst": false, 00:24:09.538 "ddgst": false, 00:24:09.538 "multipath": "failover", 00:24:09.538 "allow_unrecognized_csi": false, 00:24:09.538 "method": "bdev_nvme_attach_controller", 00:24:09.538 "req_id": 1 00:24:09.538 } 00:24:09.538 Got JSON-RPC error response 00:24:09.538 response: 00:24:09.538 { 00:24:09.538 "code": -114, 00:24:09.538 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:09.538 } 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.538 11:47:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:09.538 NVMe0n1 00:24:09.538 11:47:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
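The guarded attach attempts above pin down bdev_nvme's duplicate-name rules: reusing the controller name NVMe0 for a different subsystem NQN fails with JSON-RPC error -114, as does re-attaching the same portal with -x disable or -x failover; only the attach to the target's second listener (port 4421) for the same NQN succeeds, registering an extra path under the existing controller. A condensed sketch of the same checks against the bdevperf RPC socket, with SPDK's rpc.py standing in for the harness's rpc_cmd wrapper (all arguments are taken verbatim from the trace):

    RPC="rpc.py -s /var/tmp/bdevperf.sock"

    # Different subsystem NQN under an existing controller name: JSON-RPC error -114.
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 || echo "rejected (-114), as expected"

    # Same portal with multipath explicitly disabled: also -114.
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable || echo "rejected (-114), as expected"

    # Same portal in failover mode: the network path already exists, so -114 again.
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover || echo "rejected (-114), as expected"

    # New portal (second listener on 4421), same NQN: accepted as an additional path.
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1
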
00:24:09.538 11:47:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:09.538 11:47:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.538 11:47:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:09.538 11:47:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.538 11:47:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:09.538 11:47:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.538 11:47:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:09.799 00:24:09.799 11:47:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.799 11:47:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:09.799 11:47:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:09.799 11:47:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.800 11:47:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:09.800 11:47:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.800 11:47:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:09.800 11:47:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:11.187 { 00:24:11.187 "results": [ 00:24:11.187 { 00:24:11.187 "job": "NVMe0n1", 00:24:11.187 "core_mask": "0x1", 00:24:11.187 "workload": "write", 00:24:11.187 "status": "finished", 00:24:11.187 "queue_depth": 128, 00:24:11.187 "io_size": 4096, 00:24:11.187 "runtime": 1.005528, 00:24:11.187 "iops": 28705.317007582085, 00:24:11.187 "mibps": 112.13014456086752, 00:24:11.187 "io_failed": 0, 00:24:11.187 "io_timeout": 0, 00:24:11.187 "avg_latency_us": 4448.895432372506, 00:24:11.187 "min_latency_us": 2102.6133333333332, 00:24:11.187 "max_latency_us": 9666.56 00:24:11.187 } 00:24:11.187 ], 00:24:11.187 "core_count": 1 00:24:11.187 } 00:24:11.187 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:11.187 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.187 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:11.187 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.187 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:24:11.187 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1154192 00:24:11.187 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@952 -- # '[' -z 1154192 ']' 00:24:11.187 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 1154192 00:24:11.187 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:24:11.187 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:11.187 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1154192 00:24:11.187 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:11.187 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:11.187 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1154192' 00:24:11.187 killing process with pid 1154192 00:24:11.187 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 1154192 00:24:11.187 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 1154192 00:24:11.187 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:11.187 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.187 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:11.187 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.187 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:11.187 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:11.187 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:11.187 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:11.187 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:24:11.187 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:11.187 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:24:11.187 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:11.187 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:24:11.187 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:24:11.187 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:11.187 [2024-11-15 11:47:33.724534] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:24:11.187 [2024-11-15 11:47:33.724619] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1154192 ] 00:24:11.187 [2024-11-15 11:47:33.819470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:11.187 [2024-11-15 11:47:33.873001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.187 [2024-11-15 11:47:35.263868] bdev.c:4917:bdev_name_add: *ERROR*: Bdev name 017f9698-1681-4cfd-af8f-7ef6bfb3d371 already exists 00:24:11.187 [2024-11-15 11:47:35.263915] bdev.c:8146:bdev_register: *ERROR*: Unable to add uuid:017f9698-1681-4cfd-af8f-7ef6bfb3d371 alias for bdev NVMe1n1 00:24:11.187 [2024-11-15 11:47:35.263925] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:11.187 Running I/O for 1 seconds... 00:24:11.187 28672.00 IOPS, 112.00 MiB/s 00:24:11.187 Latency(us) 00:24:11.187 [2024-11-15T10:47:36.685Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:11.187 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:11.187 NVMe0n1 : 1.01 28705.32 112.13 0.00 0.00 4448.90 2102.61 9666.56 00:24:11.187 [2024-11-15T10:47:36.685Z] =================================================================================================================== 00:24:11.187 [2024-11-15T10:47:36.685Z] Total : 28705.32 112.13 0.00 0.00 4448.90 2102.61 9666.56 00:24:11.187 Received shutdown signal, test time was about 1.000000 seconds 00:24:11.187 00:24:11.187 Latency(us) 00:24:11.187 [2024-11-15T10:47:36.685Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:11.187 [2024-11-15T10:47:36.685Z] =================================================================================================================== 00:24:11.187 [2024-11-15T10:47:36.685Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:11.187 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:11.187 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:11.187 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:24:11.187 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:24:11.187 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:11.187 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:24:11.187 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:11.187 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:24:11.187 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:11.187 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:11.187 rmmod nvme_tcp 00:24:11.449 rmmod nvme_fabrics 00:24:11.449 rmmod nvme_keyring 00:24:11.449 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:11.449 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:24:11.449 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:24:11.449 
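With two controllers attached (the grep -c NVMe check above expects exactly 2), perform_tests drives the logged workload: 128-deep 4 KiB writes for one second, landing at roughly 28.7k IOPS. The try.txt dump also explains the earlier NVMe1 registration noise: both controllers expose the same namespace UUID, so adding the alias for NVMe1n1 fails with "Bdev name ... already exists", which is harmless in this flow. For reference, a typical RPC-driven bdevperf invocation matching the logged job parameters; the exact flags used by this run are outside the excerpt, so treat these as assumed defaults:

    # bdevperf in RPC mode: -z makes it wait for a perform_tests RPC before starting I/O.
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 &

    # ...attach controllers over the socket as sketched above, then kick off the run:
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
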
11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1154083 ']' 00:24:11.449 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1154083 00:24:11.449 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 1154083 ']' 00:24:11.449 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 1154083 00:24:11.449 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:24:11.449 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:11.449 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1154083 00:24:11.449 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:11.449 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:11.449 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1154083' 00:24:11.449 killing process with pid 1154083 00:24:11.449 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 1154083 00:24:11.449 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 1154083 00:24:11.449 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:11.449 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:11.449 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:11.449 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:24:11.449 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:24:11.449 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:11.449 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:24:11.449 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:11.449 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:11.449 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.449 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:11.449 11:47:36 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:13.996 11:47:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:13.996 00:24:13.997 real 0m14.261s 00:24:13.997 user 0m17.936s 00:24:13.997 sys 0m6.552s 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:13.997 ************************************ 00:24:13.997 END TEST nvmf_multicontroller 00:24:13.997 ************************************ 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.997 ************************************ 00:24:13.997 START TEST nvmf_aer 00:24:13.997 ************************************ 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:13.997 * Looking for test storage... 00:24:13.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:13.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.997 --rc genhtml_branch_coverage=1 00:24:13.997 --rc genhtml_function_coverage=1 00:24:13.997 --rc genhtml_legend=1 00:24:13.997 --rc geninfo_all_blocks=1 00:24:13.997 --rc geninfo_unexecuted_blocks=1 00:24:13.997 00:24:13.997 ' 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:13.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.997 --rc genhtml_branch_coverage=1 00:24:13.997 --rc genhtml_function_coverage=1 00:24:13.997 --rc genhtml_legend=1 00:24:13.997 --rc geninfo_all_blocks=1 00:24:13.997 --rc geninfo_unexecuted_blocks=1 00:24:13.997 00:24:13.997 ' 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:13.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.997 --rc genhtml_branch_coverage=1 00:24:13.997 --rc genhtml_function_coverage=1 00:24:13.997 --rc genhtml_legend=1 00:24:13.997 --rc geninfo_all_blocks=1 00:24:13.997 --rc geninfo_unexecuted_blocks=1 00:24:13.997 00:24:13.997 ' 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:13.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:13.997 --rc genhtml_branch_coverage=1 00:24:13.997 --rc genhtml_function_coverage=1 00:24:13.997 --rc genhtml_legend=1 00:24:13.997 --rc geninfo_all_blocks=1 00:24:13.997 --rc geninfo_unexecuted_blocks=1 00:24:13.997 00:24:13.997 ' 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.997 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:24:13.998 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.998 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:24:13.998 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:13.998 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:13.998 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:13.998 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:13.998 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:13.998 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:13.998 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:13.998 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:13.998 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:13.998 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:13.998 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:13.998 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:13.998 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:13.998 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:13.998 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:13.998 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:13.998 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:13.998 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:13.998 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:13.998 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:13.998 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:24:13.998 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:24:13.998 11:47:39 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:22.145 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:22.145 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:22.145 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:22.145 11:47:46 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:22.145 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:22.146 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:22.146 
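The autodetect above picked the two E810 ports (0000:4b:00.0/1, driver ice, net devices cvl_0_0 and cvl_0_1) and built the harness's usual single-host loopback topology: the target port is moved into its own network namespace so host and target can exchange NVMe/TCP over real NICs on one machine. The equivalent commands, collected from the trace with annotations added:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP in

The ping checks that follow confirm both directions before the target is started.
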
11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:22.146 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:22.146 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.559 ms 00:24:22.146 00:24:22.146 --- 10.0.0.2 ping statistics --- 00:24:22.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.146 rtt min/avg/max/mdev = 0.559/0.559/0.559/0.000 ms 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:22.146 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:22.146 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:24:22.146 00:24:22.146 --- 10.0.0.1 ping statistics --- 00:24:22.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.146 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1159068 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1159068 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # '[' -z 1159068 ']' 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:22.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:22.146 11:47:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:22.146 [2024-11-15 11:47:46.931076] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:24:22.146 [2024-11-15 11:47:46.931147] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:22.146 [2024-11-15 11:47:47.030705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:22.146 [2024-11-15 11:47:47.085181] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:22.146 [2024-11-15 11:47:47.085233] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:22.146 [2024-11-15 11:47:47.085241] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:22.146 [2024-11-15 11:47:47.085248] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:22.146 [2024-11-15 11:47:47.085254] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:22.146 [2024-11-15 11:47:47.087379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:22.146 [2024-11-15 11:47:47.087544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:22.146 [2024-11-15 11:47:47.087713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:22.146 [2024-11-15 11:47:47.087713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:22.408 11:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:22.408 11:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@866 -- # return 0 00:24:22.408 11:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:22.408 11:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:22.408 11:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:22.408 11:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:22.408 11:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:22.408 11:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.408 11:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:22.408 [2024-11-15 11:47:47.817813] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:22.408 11:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.408 11:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:22.408 11:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.408 11:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:22.408 Malloc0 00:24:22.408 11:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.408 11:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:22.408 11:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.408 11:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:22.408 11:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:24:22.408 11:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:22.408 11:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.408 11:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:22.408 11:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.408 11:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:22.408 11:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.408 11:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:22.408 [2024-11-15 11:47:47.892246] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:22.408 11:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.409 11:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:22.409 11:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.409 11:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:22.670 [ 00:24:22.670 { 00:24:22.670 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:22.670 "subtype": "Discovery", 00:24:22.670 "listen_addresses": [], 00:24:22.670 "allow_any_host": true, 00:24:22.670 "hosts": [] 00:24:22.670 }, 00:24:22.670 { 00:24:22.670 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:22.670 "subtype": "NVMe", 00:24:22.670 "listen_addresses": [ 00:24:22.670 { 00:24:22.670 "trtype": "TCP", 00:24:22.670 "adrfam": "IPv4", 00:24:22.670 "traddr": "10.0.0.2", 00:24:22.670 "trsvcid": "4420" 00:24:22.670 } 00:24:22.671 ], 00:24:22.671 "allow_any_host": true, 00:24:22.671 "hosts": [], 00:24:22.671 "serial_number": "SPDK00000000000001", 00:24:22.671 "model_number": "SPDK bdev Controller", 00:24:22.671 "max_namespaces": 2, 00:24:22.671 "min_cntlid": 1, 00:24:22.671 "max_cntlid": 65519, 00:24:22.671 "namespaces": [ 00:24:22.671 { 00:24:22.671 "nsid": 1, 00:24:22.671 "bdev_name": "Malloc0", 00:24:22.671 "name": "Malloc0", 00:24:22.671 "nguid": "8B1B68A6AEEA4C3DB1BF5A102CF1DD75", 00:24:22.671 "uuid": "8b1b68a6-aeea-4c3d-b1bf-5a102cf1dd75" 00:24:22.671 } 00:24:22.671 ] 00:24:22.671 } 00:24:22.671 ] 00:24:22.671 11:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.671 11:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:22.671 11:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:22.671 11:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1159228 00:24:22.671 11:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:22.671 11:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:22.671 11:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # local i=0 00:24:22.671 11:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:22.671 11:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:24:22.671 11:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=1 00:24:22.671 11:47:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:24:22.671 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:22.671 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:24:22.671 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=2 00:24:22.671 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:24:22.671 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:22.671 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 2 -lt 200 ']' 00:24:22.671 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=3 00:24:22.671 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:24:22.932 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:22.932 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:22.932 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1278 -- # return 0 00:24:22.932 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:22.932 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.932 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:22.932 Malloc1 00:24:22.932 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.932 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:22.932 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.932 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:22.932 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.932 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:22.932 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.932 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:22.932 Asynchronous Event Request test 00:24:22.932 Attaching to 10.0.0.2 00:24:22.932 Attached to 10.0.0.2 00:24:22.932 Registering asynchronous event callbacks... 00:24:22.932 Starting namespace attribute notice tests for all controllers... 00:24:22.932 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:22.932 aer_cb - Changed Namespace 00:24:22.932 Cleaning up... 
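That output block is the heart of the AER test: the target is provisioned with a two-namespace subsystem (-m 2), the aer tool from test/nvme/aer connects and arms asynchronous event callbacks, and hot-adding a second namespace makes the controller raise a Namespace Attribute Changed notice (log page 4, event type 0x02), reported via aer_cb before cleanup. The sequence, condensed from the trace with rpc.py standing in for the harness's rpc_cmd wrapper and paths shortened to the repo root:

    # Target runs inside the namespace created earlier (flags verbatim from the trace).
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 --name Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Host side: aer arms the event callbacks and touches the file once ready
    # (the -r transport ID string and -n 2 are taken verbatim from the trace).
    test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &

    # Hot-add nsid 2: this is what fires the AEN seen in the output above.
    rpc.py bdev_malloc_create 64 4096 --name Malloc1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2

The nvmf_get_subsystems listing that follows shows the subsystem with both namespaces (Malloc0 as nsid 1, Malloc1 as nsid 2) before the test tears everything down.
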
00:24:22.932 [ 00:24:22.932 { 00:24:22.932 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:22.932 "subtype": "Discovery", 00:24:22.932 "listen_addresses": [], 00:24:22.932 "allow_any_host": true, 00:24:22.932 "hosts": [] 00:24:22.932 }, 00:24:22.932 { 00:24:22.932 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:22.932 "subtype": "NVMe", 00:24:22.932 "listen_addresses": [ 00:24:22.932 { 00:24:22.932 "trtype": "TCP", 00:24:22.932 "adrfam": "IPv4", 00:24:22.932 "traddr": "10.0.0.2", 00:24:22.932 "trsvcid": "4420" 00:24:22.932 } 00:24:22.932 ], 00:24:22.932 "allow_any_host": true, 00:24:22.932 "hosts": [], 00:24:22.932 "serial_number": "SPDK00000000000001", 00:24:22.932 "model_number": "SPDK bdev Controller", 00:24:22.932 "max_namespaces": 2, 00:24:22.932 "min_cntlid": 1, 00:24:22.932 "max_cntlid": 65519, 00:24:22.932 "namespaces": [ 00:24:22.932 { 00:24:22.932 "nsid": 1, 00:24:22.932 "bdev_name": "Malloc0", 00:24:22.932 "name": "Malloc0", 00:24:22.932 "nguid": "8B1B68A6AEEA4C3DB1BF5A102CF1DD75", 00:24:22.932 "uuid": "8b1b68a6-aeea-4c3d-b1bf-5a102cf1dd75" 00:24:22.932 }, 00:24:22.932 { 00:24:22.932 "nsid": 2, 00:24:22.932 "bdev_name": "Malloc1", 00:24:22.932 "name": "Malloc1", 00:24:22.932 "nguid": "E537FDB0EBB143EA9AC33193F2905078", 00:24:22.932 "uuid": "e537fdb0-ebb1-43ea-9ac3-3193f2905078" 00:24:22.932 } 00:24:22.932 ] 00:24:22.932 } 00:24:22.932 ] 00:24:22.932 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.932 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1159228 00:24:22.932 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:22.932 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.932 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:22.932 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.932 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:22.932 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.933 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:22.933 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.933 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:22.933 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.933 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:22.933 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.933 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:22.933 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:22.933 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:22.933 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:24:22.933 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:22.933 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:24:22.933 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:22.933 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:22.933 rmmod 
nvme_tcp 00:24:22.933 rmmod nvme_fabrics 00:24:23.194 rmmod nvme_keyring 00:24:23.194 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:23.194 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:24:23.194 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:24:23.194 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 1159068 ']' 00:24:23.194 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1159068 00:24:23.194 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # '[' -z 1159068 ']' 00:24:23.194 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # kill -0 1159068 00:24:23.194 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # uname 00:24:23.194 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:23.194 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1159068 00:24:23.194 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:23.194 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:23.195 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1159068' 00:24:23.195 killing process with pid 1159068 00:24:23.195 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # kill 1159068 00:24:23.195 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@976 -- # wait 1159068 00:24:23.456 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:23.456 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:23.456 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:23.456 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:24:23.456 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:24:23.456 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:24:23.456 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:23.456 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:23.456 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:23.456 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.456 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:23.456 11:47:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.369 11:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:25.369 00:24:25.369 real 0m11.689s 00:24:25.369 user 0m8.656s 00:24:25.369 sys 0m6.234s 00:24:25.369 11:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:25.369 11:47:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:25.369 ************************************ 00:24:25.369 END TEST nvmf_aer 00:24:25.369 ************************************ 00:24:25.369 11:47:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:25.369 11:47:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:25.369 11:47:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:25.369 11:47:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.633 ************************************ 00:24:25.633 START TEST nvmf_async_init 00:24:25.633 ************************************ 00:24:25.633 11:47:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:25.633 * Looking for test storage... 00:24:25.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:25.633 11:47:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:25.633 11:47:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:24:25.633 11:47:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:25.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.633 --rc genhtml_branch_coverage=1 00:24:25.633 --rc genhtml_function_coverage=1 00:24:25.633 --rc genhtml_legend=1 00:24:25.633 --rc geninfo_all_blocks=1 00:24:25.633 --rc geninfo_unexecuted_blocks=1 00:24:25.633 00:24:25.633 ' 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:25.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.633 --rc genhtml_branch_coverage=1 00:24:25.633 --rc genhtml_function_coverage=1 00:24:25.633 --rc genhtml_legend=1 00:24:25.633 --rc geninfo_all_blocks=1 00:24:25.633 --rc geninfo_unexecuted_blocks=1 00:24:25.633 00:24:25.633 ' 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:25.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.633 --rc genhtml_branch_coverage=1 00:24:25.633 --rc genhtml_function_coverage=1 00:24:25.633 --rc genhtml_legend=1 00:24:25.633 --rc geninfo_all_blocks=1 00:24:25.633 --rc geninfo_unexecuted_blocks=1 00:24:25.633 00:24:25.633 ' 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:25.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.633 --rc genhtml_branch_coverage=1 00:24:25.633 --rc genhtml_function_coverage=1 00:24:25.633 --rc genhtml_legend=1 00:24:25.633 --rc geninfo_all_blocks=1 00:24:25.633 --rc geninfo_unexecuted_blocks=1 00:24:25.633 00:24:25.633 ' 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:25.633 11:47:51 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:24:25.633 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:25.634 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:25.634 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:25.634 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.634 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.634 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.634 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:25.634 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.634 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:24:25.634 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:25.634 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:25.634 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:25.634 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:25.634 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:25.634 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:25.634 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:25.634 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:25.634 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:25.634 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:25.634 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:25.634 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:25.634 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:25.634 11:47:51 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:25.634 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:25.634 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:25.634 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=acf8b81b39b84afba5811f80b60dfcb8 00:24:25.634 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:25.634 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:25.634 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:25.634 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:25.634 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:25.634 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:25.634 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.634 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:25.634 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.634 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:25.634 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:25.634 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:24:25.634 11:47:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:33.777 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:33.777 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:33.777 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.777 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:33.778 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:33.778 11:47:58 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:33.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:33.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.692 ms 00:24:33.778 00:24:33.778 --- 10.0.0.2 ping statistics --- 00:24:33.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.778 rtt min/avg/max/mdev = 0.692/0.692/0.692/0.000 ms 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:33.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:33.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:24:33.778 00:24:33.778 --- 10.0.0.1 ping statistics --- 00:24:33.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.778 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1163562 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1163562 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # '[' -z 1163562 ']' 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:33.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:33.778 11:47:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:33.778 [2024-11-15 11:47:58.715183] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
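The setup traced above splits the two e810 ports between the default namespace and a fresh one, so initiator and target traffic cross a real link, then verifies reachability in both directions with ping before launching the target inside the namespace. Condensed from the traced commands (interface names are the ones this node detected and will differ elsewhere):

# Condensed netns plumbing, as traced: target-side port moves into its own
# namespace, both ends get addresses, the NVMe/TCP port is opened, then
# nvmf_tgt starts inside the namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP port
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
# the harness then waits for the app's /var/tmp/spdk.sock to appear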
00:24:33.778 [2024-11-15 11:47:58.715250] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:33.778 [2024-11-15 11:47:58.816197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.778 [2024-11-15 11:47:58.867193] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:33.778 [2024-11-15 11:47:58.867244] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:33.778 [2024-11-15 11:47:58.867253] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:33.778 [2024-11-15 11:47:58.867261] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:33.778 [2024-11-15 11:47:58.867267] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:33.778 [2024-11-15 11:47:58.868081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.349 11:47:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:34.349 11:47:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@866 -- # return 0 00:24:34.349 11:47:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:34.349 11:47:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:34.349 11:47:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:34.349 11:47:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:34.349 11:47:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:34.349 11:47:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.349 11:47:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:34.349 [2024-11-15 11:47:59.595449] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:34.349 11:47:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.349 11:47:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:34.349 11:47:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.349 11:47:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:34.349 null0 00:24:34.349 11:47:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.349 11:47:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:34.349 11:47:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.349 11:47:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:34.349 11:47:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.349 11:47:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:34.349 11:47:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:24:34.349 11:47:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:34.349 11:47:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.349 11:47:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g acf8b81b39b84afba5811f80b60dfcb8 00:24:34.349 11:47:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.349 11:47:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:34.349 11:47:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.349 11:47:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:34.349 11:47:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.349 11:47:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:34.349 [2024-11-15 11:47:59.655836] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:34.349 11:47:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.349 11:47:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:34.349 11:47:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.349 11:47:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:34.610 nvme0n1 00:24:34.610 11:47:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.610 11:47:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:34.610 11:47:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.610 11:47:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:34.610 [ 00:24:34.610 { 00:24:34.610 "name": "nvme0n1", 00:24:34.610 "aliases": [ 00:24:34.610 "acf8b81b-39b8-4afb-a581-1f80b60dfcb8" 00:24:34.610 ], 00:24:34.610 "product_name": "NVMe disk", 00:24:34.610 "block_size": 512, 00:24:34.610 "num_blocks": 2097152, 00:24:34.610 "uuid": "acf8b81b-39b8-4afb-a581-1f80b60dfcb8", 00:24:34.610 "numa_id": 0, 00:24:34.610 "assigned_rate_limits": { 00:24:34.610 "rw_ios_per_sec": 0, 00:24:34.610 "rw_mbytes_per_sec": 0, 00:24:34.610 "r_mbytes_per_sec": 0, 00:24:34.610 "w_mbytes_per_sec": 0 00:24:34.610 }, 00:24:34.610 "claimed": false, 00:24:34.610 "zoned": false, 00:24:34.610 "supported_io_types": { 00:24:34.610 "read": true, 00:24:34.610 "write": true, 00:24:34.610 "unmap": false, 00:24:34.610 "flush": true, 00:24:34.610 "reset": true, 00:24:34.610 "nvme_admin": true, 00:24:34.610 "nvme_io": true, 00:24:34.610 "nvme_io_md": false, 00:24:34.610 "write_zeroes": true, 00:24:34.610 "zcopy": false, 00:24:34.610 "get_zone_info": false, 00:24:34.610 "zone_management": false, 00:24:34.610 "zone_append": false, 00:24:34.610 "compare": true, 00:24:34.610 "compare_and_write": true, 00:24:34.610 "abort": true, 00:24:34.610 "seek_hole": false, 00:24:34.610 "seek_data": false, 00:24:34.610 "copy": true, 00:24:34.610 "nvme_iov_md": false 00:24:34.610 }, 00:24:34.610 
"memory_domains": [ 00:24:34.610 { 00:24:34.610 "dma_device_id": "system", 00:24:34.610 "dma_device_type": 1 00:24:34.610 } 00:24:34.610 ], 00:24:34.610 "driver_specific": { 00:24:34.610 "nvme": [ 00:24:34.610 { 00:24:34.610 "trid": { 00:24:34.610 "trtype": "TCP", 00:24:34.610 "adrfam": "IPv4", 00:24:34.610 "traddr": "10.0.0.2", 00:24:34.610 "trsvcid": "4420", 00:24:34.610 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:34.610 }, 00:24:34.610 "ctrlr_data": { 00:24:34.610 "cntlid": 1, 00:24:34.610 "vendor_id": "0x8086", 00:24:34.610 "model_number": "SPDK bdev Controller", 00:24:34.610 "serial_number": "00000000000000000000", 00:24:34.610 "firmware_revision": "25.01", 00:24:34.610 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:34.610 "oacs": { 00:24:34.610 "security": 0, 00:24:34.610 "format": 0, 00:24:34.610 "firmware": 0, 00:24:34.610 "ns_manage": 0 00:24:34.610 }, 00:24:34.610 "multi_ctrlr": true, 00:24:34.610 "ana_reporting": false 00:24:34.610 }, 00:24:34.610 "vs": { 00:24:34.610 "nvme_version": "1.3" 00:24:34.610 }, 00:24:34.610 "ns_data": { 00:24:34.610 "id": 1, 00:24:34.610 "can_share": true 00:24:34.610 } 00:24:34.610 } 00:24:34.610 ], 00:24:34.610 "mp_policy": "active_passive" 00:24:34.610 } 00:24:34.610 } 00:24:34.610 ] 00:24:34.610 11:47:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.610 11:47:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:34.610 11:47:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.610 11:47:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:34.610 [2024-11-15 11:47:59.932269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:34.610 [2024-11-15 11:47:59.932359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15c3ce0 (9): Bad file descriptor 00:24:34.610 [2024-11-15 11:48:00.064685] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:24:34.610 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.610 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:34.610 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.610 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:34.610 [ 00:24:34.610 { 00:24:34.610 "name": "nvme0n1", 00:24:34.610 "aliases": [ 00:24:34.610 "acf8b81b-39b8-4afb-a581-1f80b60dfcb8" 00:24:34.610 ], 00:24:34.610 "product_name": "NVMe disk", 00:24:34.610 "block_size": 512, 00:24:34.610 "num_blocks": 2097152, 00:24:34.610 "uuid": "acf8b81b-39b8-4afb-a581-1f80b60dfcb8", 00:24:34.610 "numa_id": 0, 00:24:34.610 "assigned_rate_limits": { 00:24:34.610 "rw_ios_per_sec": 0, 00:24:34.610 "rw_mbytes_per_sec": 0, 00:24:34.610 "r_mbytes_per_sec": 0, 00:24:34.610 "w_mbytes_per_sec": 0 00:24:34.610 }, 00:24:34.610 "claimed": false, 00:24:34.610 "zoned": false, 00:24:34.610 "supported_io_types": { 00:24:34.610 "read": true, 00:24:34.610 "write": true, 00:24:34.610 "unmap": false, 00:24:34.610 "flush": true, 00:24:34.610 "reset": true, 00:24:34.610 "nvme_admin": true, 00:24:34.610 "nvme_io": true, 00:24:34.610 "nvme_io_md": false, 00:24:34.610 "write_zeroes": true, 00:24:34.610 "zcopy": false, 00:24:34.610 "get_zone_info": false, 00:24:34.610 "zone_management": false, 00:24:34.610 "zone_append": false, 00:24:34.610 "compare": true, 00:24:34.610 "compare_and_write": true, 00:24:34.610 "abort": true, 00:24:34.610 "seek_hole": false, 00:24:34.610 "seek_data": false, 00:24:34.610 "copy": true, 00:24:34.610 "nvme_iov_md": false 00:24:34.610 }, 00:24:34.610 "memory_domains": [ 00:24:34.610 { 00:24:34.610 "dma_device_id": "system", 00:24:34.610 "dma_device_type": 1 00:24:34.610 } 00:24:34.610 ], 00:24:34.610 "driver_specific": { 00:24:34.610 "nvme": [ 00:24:34.610 { 00:24:34.610 "trid": { 00:24:34.610 "trtype": "TCP", 00:24:34.610 "adrfam": "IPv4", 00:24:34.610 "traddr": "10.0.0.2", 00:24:34.610 "trsvcid": "4420", 00:24:34.610 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:34.610 }, 00:24:34.610 "ctrlr_data": { 00:24:34.610 "cntlid": 2, 00:24:34.610 "vendor_id": "0x8086", 00:24:34.610 "model_number": "SPDK bdev Controller", 00:24:34.610 "serial_number": "00000000000000000000", 00:24:34.610 "firmware_revision": "25.01", 00:24:34.610 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:34.610 "oacs": { 00:24:34.610 "security": 0, 00:24:34.610 "format": 0, 00:24:34.610 "firmware": 0, 00:24:34.610 "ns_manage": 0 00:24:34.610 }, 00:24:34.610 "multi_ctrlr": true, 00:24:34.610 "ana_reporting": false 00:24:34.610 }, 00:24:34.610 "vs": { 00:24:34.610 "nvme_version": "1.3" 00:24:34.610 }, 00:24:34.610 "ns_data": { 00:24:34.610 "id": 1, 00:24:34.610 "can_share": true 00:24:34.610 } 00:24:34.610 } 00:24:34.610 ], 00:24:34.610 "mp_policy": "active_passive" 00:24:34.610 } 00:24:34.610 } 00:24:34.610 ] 00:24:34.610 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.610 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.610 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.610 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:34.871 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
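Strung together, the trace up to this point is a self-contained recipe for exporting a null bdev over NVMe/TCP, consuming it from the same app, and surviving a controller reset. The equivalent direct RPC sequence, with sizes, NQNs, and the NGUID copied from the log (rpc_cmd is a thin wrapper around scripts/rpc.py against the app's /var/tmp/spdk.sock):

# Non-TLS half of async_init as a standalone recipe; values from the trace.
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py bdev_null_create null0 1024 512          # 1024 MiB of 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 \
        -g acf8b81b39b84afba5811f80b60dfcb8             # uuidgen output, dashes stripped
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
        -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0   # surfaces bdev nvme0n1
scripts/rpc.py bdev_nvme_reset_controller nvme0         # reconnect: cntlid 1 -> 2
scripts/rpc.py bdev_get_bdevs -b nvme0n1                # same uuid/nguid, new cntlid
scripts/rpc.py bdev_nvme_detach_controller nvme0

The two bdev_get_bdevs dumps above differ only in ctrlr_data.cntlid (1, then 2): the reset tears down and reattaches the fabric connection, the target allocates a fresh controller, and the bdev name, UUID, and namespace all survive, which is the property the test is checking.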
00:24:34.871 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:34.871 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.xQg0qbbt4W 00:24:34.871 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:34.871 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.xQg0qbbt4W 00:24:34.871 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.xQg0qbbt4W 00:24:34.871 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.871 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:34.871 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.871 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:34.871 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.871 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:34.871 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.871 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:34.871 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.871 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:34.871 [2024-11-15 11:48:00.156979] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:34.871 [2024-11-15 11:48:00.157193] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:34.871 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.871 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:24:34.871 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.871 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:34.871 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.871 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:34.871 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.871 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:34.871 [2024-11-15 11:48:00.177043] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:34.871 nvme0n1 00:24:34.871 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.871 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:24:34.871 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.871 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:34.871 [ 00:24:34.871 { 00:24:34.871 "name": "nvme0n1", 00:24:34.871 "aliases": [ 00:24:34.871 "acf8b81b-39b8-4afb-a581-1f80b60dfcb8" 00:24:34.871 ], 00:24:34.871 "product_name": "NVMe disk", 00:24:34.871 "block_size": 512, 00:24:34.871 "num_blocks": 2097152, 00:24:34.871 "uuid": "acf8b81b-39b8-4afb-a581-1f80b60dfcb8", 00:24:34.871 "numa_id": 0, 00:24:34.871 "assigned_rate_limits": { 00:24:34.871 "rw_ios_per_sec": 0, 00:24:34.871 "rw_mbytes_per_sec": 0, 00:24:34.871 "r_mbytes_per_sec": 0, 00:24:34.871 "w_mbytes_per_sec": 0 00:24:34.871 }, 00:24:34.871 "claimed": false, 00:24:34.871 "zoned": false, 00:24:34.871 "supported_io_types": { 00:24:34.871 "read": true, 00:24:34.871 "write": true, 00:24:34.871 "unmap": false, 00:24:34.871 "flush": true, 00:24:34.871 "reset": true, 00:24:34.871 "nvme_admin": true, 00:24:34.871 "nvme_io": true, 00:24:34.871 "nvme_io_md": false, 00:24:34.871 "write_zeroes": true, 00:24:34.871 "zcopy": false, 00:24:34.871 "get_zone_info": false, 00:24:34.871 "zone_management": false, 00:24:34.871 "zone_append": false, 00:24:34.871 "compare": true, 00:24:34.871 "compare_and_write": true, 00:24:34.871 "abort": true, 00:24:34.872 "seek_hole": false, 00:24:34.872 "seek_data": false, 00:24:34.872 "copy": true, 00:24:34.872 "nvme_iov_md": false 00:24:34.872 }, 00:24:34.872 "memory_domains": [ 00:24:34.872 { 00:24:34.872 "dma_device_id": "system", 00:24:34.872 "dma_device_type": 1 00:24:34.872 } 00:24:34.872 ], 00:24:34.872 "driver_specific": { 00:24:34.872 "nvme": [ 00:24:34.872 { 00:24:34.872 "trid": { 00:24:34.872 "trtype": "TCP", 00:24:34.872 "adrfam": "IPv4", 00:24:34.872 "traddr": "10.0.0.2", 00:24:34.872 "trsvcid": "4421", 00:24:34.872 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:34.872 }, 00:24:34.872 "ctrlr_data": { 00:24:34.872 "cntlid": 3, 00:24:34.872 "vendor_id": "0x8086", 00:24:34.872 "model_number": "SPDK bdev Controller", 00:24:34.872 "serial_number": "00000000000000000000", 00:24:34.872 "firmware_revision": "25.01", 00:24:34.872 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:34.872 "oacs": { 00:24:34.872 "security": 0, 00:24:34.872 "format": 0, 00:24:34.872 "firmware": 0, 00:24:34.872 "ns_manage": 0 00:24:34.872 }, 00:24:34.872 "multi_ctrlr": true, 00:24:34.872 "ana_reporting": false 00:24:34.872 }, 00:24:34.872 "vs": { 00:24:34.872 "nvme_version": "1.3" 00:24:34.872 }, 00:24:34.872 "ns_data": { 00:24:34.872 "id": 1, 00:24:34.872 "can_share": true 00:24:34.872 } 00:24:34.872 } 00:24:34.872 ], 00:24:34.872 "mp_policy": "active_passive" 00:24:34.872 } 00:24:34.872 } 00:24:34.872 ] 00:24:34.872 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.872 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.872 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.872 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:34.872 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.872 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.xQg0qbbt4W 00:24:34.872 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
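The second leg repeats the attach over a TLS listener on port 4421: the PSK interchange key is written to a mode-0600 temp file, registered with the keyring as key0, required of host nqn.2016-06.io.spdk:host1, and passed again on connect (both ends log that TLS support is considered experimental). Condensed from the trace; the key is the test's dummy value and the mktemp path varies per run:

# TLS variant, condensed from the trace above.
KEY_PATH=$(mktemp)                                    # e.g. /tmp/tmp.xQg0qbbt4W
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY_PATH"
chmod 0600 "$KEY_PATH"                                # tighten perms before registering
scripts/rpc.py keyring_file_add_key key0 "$KEY_PATH"
scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4421 --secure-channel   # TLS-only listener
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
        nqn.2016-06.io.spdk:host1 --psk key0
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
        -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host1 --psk key0       # cntlid 3 in the dump above
scripts/rpc.py bdev_nvme_detach_controller nvme0
rm -f "$KEY_PATH"                                     # cleanup, as the test does after detach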
00:24:34.872 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:24:34.872 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:34.872 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:24:34.872 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:34.872 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:24:34.872 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:34.872 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:34.872 rmmod nvme_tcp 00:24:34.872 rmmod nvme_fabrics 00:24:34.872 rmmod nvme_keyring 00:24:34.872 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:34.872 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:24:34.872 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:24:34.872 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1163562 ']' 00:24:34.872 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1163562 00:24:34.872 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' -z 1163562 ']' 00:24:34.872 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # kill -0 1163562 00:24:34.872 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # uname 00:24:34.872 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:35.132 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1163562 00:24:35.132 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:35.132 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:35.132 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1163562' 00:24:35.132 killing process with pid 1163562 00:24:35.132 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # kill 1163562 00:24:35.132 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@976 -- # wait 1163562 00:24:35.132 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:35.132 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:35.132 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:35.132 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:24:35.132 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:24:35.132 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:35.132 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:24:35.132 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:35.132 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:35.132 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
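nvmftestfini then unwinds in reverse order: host-side kernel modules out, target process down, SPDK's iptables rules stripped, and the namespace removed. A sketch of that order per the trace; _remove_spdk_ns's body is redirected away above, so deleting the namespace is its assumed effect, and nvmfpid here stands in for the pid the app printed at startup:

# Teardown order per the trace (namespace deletion is an assumption).
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"                     # pid 1163562 in this run
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK's ACCEPT rules
ip netns delete cvl_0_0_ns_spdk                        # assumed effect of _remove_spdk_ns
ip -4 addr flush cvl_0_1                               # clear the initiator-side address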
00:24:35.132 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:35.132 11:48:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:37.676 00:24:37.676 real 0m11.800s 00:24:37.676 user 0m4.225s 00:24:37.676 sys 0m6.159s 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:37.676 ************************************ 00:24:37.676 END TEST nvmf_async_init 00:24:37.676 ************************************ 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.676 ************************************ 00:24:37.676 START TEST dma 00:24:37.676 ************************************ 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:37.676 * Looking for test storage... 00:24:37.676 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:37.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.676 --rc genhtml_branch_coverage=1 00:24:37.676 --rc genhtml_function_coverage=1 00:24:37.676 --rc genhtml_legend=1 00:24:37.676 --rc geninfo_all_blocks=1 00:24:37.676 --rc geninfo_unexecuted_blocks=1 00:24:37.676 00:24:37.676 ' 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:37.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.676 --rc genhtml_branch_coverage=1 00:24:37.676 --rc genhtml_function_coverage=1 00:24:37.676 --rc genhtml_legend=1 00:24:37.676 --rc geninfo_all_blocks=1 00:24:37.676 --rc geninfo_unexecuted_blocks=1 00:24:37.676 00:24:37.676 ' 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:37.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.676 --rc genhtml_branch_coverage=1 00:24:37.676 --rc genhtml_function_coverage=1 00:24:37.676 --rc genhtml_legend=1 00:24:37.676 --rc geninfo_all_blocks=1 00:24:37.676 --rc geninfo_unexecuted_blocks=1 00:24:37.676 00:24:37.676 ' 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:37.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.676 --rc genhtml_branch_coverage=1 00:24:37.676 --rc genhtml_function_coverage=1 00:24:37.676 --rc genhtml_legend=1 00:24:37.676 --rc geninfo_all_blocks=1 00:24:37.676 --rc geninfo_unexecuted_blocks=1 00:24:37.676 00:24:37.676 ' 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:37.676 
11:48:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:37.676 11:48:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:37.677 11:48:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:37.677 11:48:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:37.677 11:48:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:37.677 11:48:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:37.677 11:48:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:37.677 11:48:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:37.677 11:48:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:37.677 11:48:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:24:37.677 11:48:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:37.677 11:48:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:37.677 11:48:02 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:37.677 11:48:02 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.677 11:48:02 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.677 11:48:02 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.677 11:48:02 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:24:37.677 11:48:02 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.677 11:48:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:24:37.677 11:48:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:37.677 11:48:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:37.677 11:48:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:37.677 11:48:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:37.677 11:48:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:37.677 11:48:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:37.677 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:37.677 11:48:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:37.677 11:48:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:37.677 11:48:02 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:37.677 11:48:02 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:37.677 11:48:02 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:24:37.677 00:24:37.677 real 0m0.241s 00:24:37.677 user 0m0.139s 00:24:37.677 sys 0m0.117s 00:24:37.677 11:48:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:37.677 11:48:02 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:24:37.677 ************************************ 00:24:37.677 END TEST dma 00:24:37.677 ************************************ 00:24:37.677 11:48:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:37.677 11:48:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:37.677 11:48:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:37.677 11:48:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.677 ************************************ 00:24:37.677 START TEST nvmf_identify 00:24:37.677 
************************************ 00:24:37.677 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:37.677 * Looking for test storage... 00:24:37.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:37.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.938 --rc genhtml_branch_coverage=1 00:24:37.938 --rc genhtml_function_coverage=1 00:24:37.938 --rc genhtml_legend=1 00:24:37.938 --rc geninfo_all_blocks=1 00:24:37.938 --rc geninfo_unexecuted_blocks=1 00:24:37.938 00:24:37.938 ' 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:37.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.938 --rc genhtml_branch_coverage=1 00:24:37.938 --rc genhtml_function_coverage=1 00:24:37.938 --rc genhtml_legend=1 00:24:37.938 --rc geninfo_all_blocks=1 00:24:37.938 --rc geninfo_unexecuted_blocks=1 00:24:37.938 00:24:37.938 ' 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:37.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.938 --rc genhtml_branch_coverage=1 00:24:37.938 --rc genhtml_function_coverage=1 00:24:37.938 --rc genhtml_legend=1 00:24:37.938 --rc geninfo_all_blocks=1 00:24:37.938 --rc geninfo_unexecuted_blocks=1 00:24:37.938 00:24:37.938 ' 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:37.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:37.938 --rc genhtml_branch_coverage=1 00:24:37.938 --rc genhtml_function_coverage=1 00:24:37.938 --rc genhtml_legend=1 00:24:37.938 --rc geninfo_all_blocks=1 00:24:37.938 --rc geninfo_unexecuted_blocks=1 00:24:37.938 00:24:37.938 ' 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:37.938 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:24:37.939 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:37.939 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:37.939 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:37.939 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.939 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.939 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.939 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:37.939 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:37.939 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:24:37.939 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:37.939 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:37.939 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:37.939 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:37.939 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:37.939 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:37.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:37.939 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:37.939 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:37.939 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:37.939 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:37.939 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:37.939 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:37.939 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:37.939 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:37.939 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:37.939 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:37.939 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:37.939 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.939 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:37.939 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:37.939 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:37.939 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:37.939 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:24:37.939 11:48:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:46.233 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:46.233 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:24:46.233 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:46.233 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:46.233 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:46.233 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:46.233 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:46.233 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:24:46.233 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:46.233 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:24:46.233 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:24:46.233 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:24:46.233 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:24:46.233 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:24:46.233 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:24:46.233 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:46.233 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:46.233 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:46.233 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:46.233 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:46.233 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:46.234 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:46.234 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
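The device walk above keys on PCI vendor:device IDs (0x8086:0x159b is an Intel E810 function) and then maps each matching function to its kernel netdev through sysfs. A minimal sketch of that mapping, with the two bus addresses taken from this run:

    for pci in 0000:4b:00.0 0000:4b:00.1; do
        # each driver-bound netdev appears as /sys/bus/pci/devices/<addr>/net/<ifname>
        for dev in /sys/bus/pci/devices/$pci/net/*; do
            [ -e "$dev" ] && echo "Found net devices under $pci: ${dev##*/}"
        done
    done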
00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:46.234 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:46.234 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:46.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:46.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:24:46.234 00:24:46.234 --- 10.0.0.2 ping statistics --- 00:24:46.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.234 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:46.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:46.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:24:46.234 00:24:46.234 --- 10.0.0.1 ping statistics --- 00:24:46.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.234 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1168777 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1168777 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 1168777 ']' 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:46.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:46.234 11:48:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:46.234 [2024-11-15 11:48:10.955357] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
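The topology nvmf_tcp_init assembled in the lines above uses one E810 port per side: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target interface (10.0.0.2), while its twin cvl_0_1 stays in the root namespace as the initiator (10.0.0.1); with NET_TYPE=phy the two ports are presumably cabled back-to-back. A minimal sketch of the same setup, with names and addresses taken from the log:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2                                  # root ns -> target ns, as logged
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and the reverse path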
00:24:46.234 [2024-11-15 11:48:10.955421] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:46.234 [2024-11-15 11:48:11.054094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:46.234 [2024-11-15 11:48:11.109146] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:46.235 [2024-11-15 11:48:11.109198] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:46.235 [2024-11-15 11:48:11.109207] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:46.235 [2024-11-15 11:48:11.109214] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:46.235 [2024-11-15 11:48:11.109221] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:46.235 [2024-11-15 11:48:11.111385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:46.235 [2024-11-15 11:48:11.111541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:46.235 [2024-11-15 11:48:11.111703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:46.235 [2024-11-15 11:48:11.111829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.521 11:48:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:46.521 11:48:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:24:46.521 11:48:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:46.521 11:48:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.521 11:48:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:46.522 [2024-11-15 11:48:11.782263] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:46.522 11:48:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.522 11:48:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:46.522 11:48:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:46.522 11:48:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:46.522 11:48:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:46.522 11:48:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.522 11:48:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:46.522 Malloc0 00:24:46.522 11:48:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.522 11:48:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:46.522 11:48:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.522 11:48:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:46.522 11:48:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.522 11:48:11 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:46.522 11:48:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.522 11:48:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:46.522 11:48:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.522 11:48:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:46.522 11:48:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.522 11:48:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:46.522 [2024-11-15 11:48:11.905455] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:46.522 11:48:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.522 11:48:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:46.522 11:48:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.522 11:48:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:46.522 11:48:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.522 11:48:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:46.522 11:48:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.522 11:48:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:46.522 [ 00:24:46.522 { 00:24:46.522 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:46.522 "subtype": "Discovery", 00:24:46.522 "listen_addresses": [ 00:24:46.522 { 00:24:46.522 "trtype": "TCP", 00:24:46.522 "adrfam": "IPv4", 00:24:46.522 "traddr": "10.0.0.2", 00:24:46.522 "trsvcid": "4420" 00:24:46.522 } 00:24:46.522 ], 00:24:46.522 "allow_any_host": true, 00:24:46.522 "hosts": [] 00:24:46.522 }, 00:24:46.522 { 00:24:46.522 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.522 "subtype": "NVMe", 00:24:46.522 "listen_addresses": [ 00:24:46.522 { 00:24:46.522 "trtype": "TCP", 00:24:46.522 "adrfam": "IPv4", 00:24:46.522 "traddr": "10.0.0.2", 00:24:46.522 "trsvcid": "4420" 00:24:46.522 } 00:24:46.522 ], 00:24:46.522 "allow_any_host": true, 00:24:46.522 "hosts": [], 00:24:46.522 "serial_number": "SPDK00000000000001", 00:24:46.522 "model_number": "SPDK bdev Controller", 00:24:46.522 "max_namespaces": 32, 00:24:46.522 "min_cntlid": 1, 00:24:46.522 "max_cntlid": 65519, 00:24:46.522 "namespaces": [ 00:24:46.522 { 00:24:46.522 "nsid": 1, 00:24:46.522 "bdev_name": "Malloc0", 00:24:46.522 "name": "Malloc0", 00:24:46.522 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:46.522 "eui64": "ABCDEF0123456789", 00:24:46.522 "uuid": "32d11c36-c83b-485e-b4cb-17c6987e5afe" 00:24:46.522 } 00:24:46.522 ] 00:24:46.522 } 00:24:46.522 ] 00:24:46.522 11:48:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.522 11:48:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:46.522 [2024-11-15 11:48:11.969933] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:24:46.522 [2024-11-15 11:48:11.969984] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1168988 ] 00:24:46.784 [2024-11-15 11:48:12.025394] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:24:46.784 [2024-11-15 11:48:12.025461] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:46.784 [2024-11-15 11:48:12.025467] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:46.784 [2024-11-15 11:48:12.025486] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:46.784 [2024-11-15 11:48:12.025500] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:46.784 [2024-11-15 11:48:12.029989] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:24:46.784 [2024-11-15 11:48:12.030041] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x138c690 0 00:24:46.784 [2024-11-15 11:48:12.030267] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:46.784 [2024-11-15 11:48:12.030276] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:46.784 [2024-11-15 11:48:12.030280] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:46.784 [2024-11-15 11:48:12.030284] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:46.784 [2024-11-15 11:48:12.030320] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:46.785 [2024-11-15 11:48:12.030326] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:46.785 [2024-11-15 11:48:12.030331] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x138c690) 00:24:46.785 [2024-11-15 11:48:12.030348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:46.785 [2024-11-15 11:48:12.030364] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ee100, cid 0, qid 0 00:24:46.785 [2024-11-15 11:48:12.037580] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:46.785 [2024-11-15 11:48:12.037591] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:46.785 [2024-11-15 11:48:12.037595] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:46.785 [2024-11-15 11:48:12.037599] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ee100) on tqpair=0x138c690 00:24:46.785 [2024-11-15 11:48:12.037610] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:46.785 [2024-11-15 11:48:12.037618] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:24:46.785 [2024-11-15 11:48:12.037623] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:24:46.785 [2024-11-15 11:48:12.037639] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:46.785 [2024-11-15 11:48:12.037644] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:46.785 [2024-11-15 11:48:12.037647] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x138c690) 00:24:46.785 [2024-11-15 11:48:12.037656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.785 [2024-11-15 11:48:12.037671] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ee100, cid 0, qid 0 00:24:46.785 [2024-11-15 11:48:12.037900] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:46.785 [2024-11-15 11:48:12.037907] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:46.785 [2024-11-15 11:48:12.037910] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:46.785 [2024-11-15 11:48:12.037914] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ee100) on tqpair=0x138c690 00:24:46.785 [2024-11-15 11:48:12.037920] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:24:46.785 [2024-11-15 11:48:12.037934] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:24:46.785 [2024-11-15 11:48:12.037942] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:46.785 [2024-11-15 11:48:12.037946] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:46.785 [2024-11-15 11:48:12.037949] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x138c690) 00:24:46.785 [2024-11-15 11:48:12.037956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.785 [2024-11-15 11:48:12.037968] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ee100, cid 0, qid 0 00:24:46.785 [2024-11-15 11:48:12.038042] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:46.785 [2024-11-15 11:48:12.038048] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:46.785 [2024-11-15 11:48:12.038051] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:46.785 [2024-11-15 11:48:12.038055] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ee100) on tqpair=0x138c690 00:24:46.785 [2024-11-15 11:48:12.038061] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:24:46.785 [2024-11-15 11:48:12.038069] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:46.785 [2024-11-15 11:48:12.038076] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:46.785 [2024-11-15 11:48:12.038080] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:46.785 [2024-11-15 11:48:12.038083] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x138c690) 00:24:46.785 [2024-11-15 11:48:12.038090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.785 [2024-11-15 11:48:12.038101] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ee100, cid 0, qid 0 
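The debug trace here is the standard controller bring-up that spdk_nvme_identify drives against the discovery subsystem: FABRIC CONNECT on the admin queue, PROPERTY GET of VS and CAP, a wait for CSTS.RDY = 0 with CC.EN = 0, PROPERTY SET of CC.EN = 1, a wait for CSTS.RDY = 1, then IDENTIFY. The same target state can be rebuilt and probed by hand; a sketch using SPDK's rpc.py and nvme-cli (both present in this environment, paths shortened), equivalent to the rpc_cmd calls logged earlier in this test:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    nvme discover -t tcp -a 10.0.0.2 -s 4420    # walks the same CONNECT / CC.EN / IDENTIFY path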
00:24:46.785 [2024-11-15 11:48:12.038180] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:46.785 [2024-11-15 11:48:12.038186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:46.785 [2024-11-15 11:48:12.038189] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:46.785 [2024-11-15 11:48:12.038193] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ee100) on tqpair=0x138c690 00:24:46.785 [2024-11-15 11:48:12.038199] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:46.785 [2024-11-15 11:48:12.038209] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:46.785 [2024-11-15 11:48:12.038213] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:46.785 [2024-11-15 11:48:12.038216] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x138c690) 00:24:46.785 [2024-11-15 11:48:12.038223] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.785 [2024-11-15 11:48:12.038234] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ee100, cid 0, qid 0 00:24:46.785 [2024-11-15 11:48:12.038303] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:46.785 [2024-11-15 11:48:12.038310] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:46.785 [2024-11-15 11:48:12.038313] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:46.785 [2024-11-15 11:48:12.038317] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ee100) on tqpair=0x138c690 00:24:46.785 [2024-11-15 11:48:12.038322] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:46.785 [2024-11-15 11:48:12.038327] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:46.785 [2024-11-15 11:48:12.038337] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:46.785 [2024-11-15 11:48:12.038447] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:24:46.785 [2024-11-15 11:48:12.038452] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:46.785 [2024-11-15 11:48:12.038461] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:46.785 [2024-11-15 11:48:12.038465] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:46.785 [2024-11-15 11:48:12.038469] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x138c690) 00:24:46.785 [2024-11-15 11:48:12.038476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.785 [2024-11-15 11:48:12.038487] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ee100, cid 0, qid 0 00:24:46.785 [2024-11-15 11:48:12.038578] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:46.785 [2024-11-15 11:48:12.038585] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:46.785 [2024-11-15 11:48:12.038588] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:46.785 [2024-11-15 11:48:12.038592] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ee100) on tqpair=0x138c690 00:24:46.785 [2024-11-15 11:48:12.038597] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:46.785 [2024-11-15 11:48:12.038607] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:46.785 [2024-11-15 11:48:12.038611] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:46.785 [2024-11-15 11:48:12.038614] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x138c690) 00:24:46.785 [2024-11-15 11:48:12.038621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.785 [2024-11-15 11:48:12.038632] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ee100, cid 0, qid 0 00:24:46.785 [2024-11-15 11:48:12.038699] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:46.785 [2024-11-15 11:48:12.038705] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:46.785 [2024-11-15 11:48:12.038709] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:46.785 [2024-11-15 11:48:12.038713] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ee100) on tqpair=0x138c690 00:24:46.785 [2024-11-15 11:48:12.038717] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:46.785 [2024-11-15 11:48:12.038722] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:46.785 [2024-11-15 11:48:12.038731] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:24:46.785 [2024-11-15 11:48:12.038743] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:46.785 [2024-11-15 11:48:12.038752] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:46.785 [2024-11-15 11:48:12.038756] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x138c690) 00:24:46.785 [2024-11-15 11:48:12.038763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.786 [2024-11-15 11:48:12.038774] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ee100, cid 0, qid 0 00:24:46.786 [2024-11-15 11:48:12.038897] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:46.786 [2024-11-15 11:48:12.038906] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:46.786 [2024-11-15 11:48:12.038910] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:46.786 [2024-11-15 11:48:12.038915] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x138c690): datao=0, datal=4096, cccid=0 00:24:46.786 [2024-11-15 11:48:12.038920] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x13ee100) on tqpair(0x138c690): expected_datao=0, payload_size=4096 00:24:46.786 [2024-11-15 11:48:12.038924] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:46.786 [2024-11-15 11:48:12.038933] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:46.786 [2024-11-15 11:48:12.038938] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:46.786 [2024-11-15 11:48:12.079723] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:46.786 [2024-11-15 11:48:12.079737] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:46.786 [2024-11-15 11:48:12.079740] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:46.786 [2024-11-15 11:48:12.079745] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ee100) on tqpair=0x138c690 00:24:46.786 [2024-11-15 11:48:12.079754] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:24:46.786 [2024-11-15 11:48:12.079760] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:24:46.786 [2024-11-15 11:48:12.079764] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:24:46.786 [2024-11-15 11:48:12.079774] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:24:46.786 [2024-11-15 11:48:12.079780] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:24:46.786 [2024-11-15 11:48:12.079785] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:24:46.786 [2024-11-15 11:48:12.079796] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:46.786 [2024-11-15 11:48:12.079804] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:46.786 [2024-11-15 11:48:12.079809] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:46.786 [2024-11-15 11:48:12.079812] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x138c690) 00:24:46.786 [2024-11-15 11:48:12.079822] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:46.786 [2024-11-15 11:48:12.079836] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ee100, cid 0, qid 0 00:24:46.786 [2024-11-15 11:48:12.080043] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:46.786 [2024-11-15 11:48:12.080049] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:46.786 [2024-11-15 11:48:12.080053] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:46.786 [2024-11-15 11:48:12.080057] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ee100) on tqpair=0x138c690 00:24:46.786 [2024-11-15 11:48:12.080065] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:46.786 [2024-11-15 11:48:12.080069] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:46.786 [2024-11-15 11:48:12.080072] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x138c690) 00:24:46.786 
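Editor's note: the "pdu type = 5" and "pdu type = 7" markers that nvme_tcp_pdu_ch_handle prints throughout this trace are NVMe/TCP PDU types: 0x05 is CapsuleResp (a completion arriving for an admin capsule) and 0x07 is C2HData (controller-to-host data, here the 4096-byte IDENTIFY payload: datal=4096, cccid=0). Below is a minimal sketch of the 8-byte common header those handlers classify; the layout follows the NVMe/TCP transport specification, but the names are ours, not SPDK's internal structs.

#include <stdint.h>

/* NVMe/TCP PDU types seen in this trace (per the transport spec). */
enum nvme_tcp_pdu_type {
	PDU_ICREQ        = 0x00, /* host -> controller connection init */
	PDU_ICRESP       = 0x01, /* "pdu type = 1" during connect */
	PDU_CAPSULE_CMD  = 0x04, /* command capsule, host -> controller */
	PDU_CAPSULE_RESP = 0x05, /* "pdu type = 5": completion capsule */
	PDU_C2H_DATA     = 0x07, /* "pdu type = 7": controller -> host data */
};

/* 8-byte common header at the start of every PDU. */
struct nvme_tcp_common_hdr {
	uint8_t  pdu_type; /* one of the values above */
	uint8_t  flags;    /* header/data digest present, etc. */
	uint8_t  hlen;     /* PDU header length */
	uint8_t  pdo;      /* PDU data offset */
	uint32_t plen;     /* total PDU length: header + data + digests */
};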
[2024-11-15 11:48:12.080078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.786 [2024-11-15 11:48:12.080085] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:46.786 [2024-11-15 11:48:12.080088] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:46.786 [2024-11-15 11:48:12.080092] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x138c690) 00:24:46.786 [2024-11-15 11:48:12.080101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.786 [2024-11-15 11:48:12.080108] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:46.786 [2024-11-15 11:48:12.080111] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:46.786 [2024-11-15 11:48:12.080115] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x138c690) 00:24:46.786 [2024-11-15 11:48:12.080121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.786 [2024-11-15 11:48:12.080127] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:46.786 [2024-11-15 11:48:12.080131] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:46.786 [2024-11-15 11:48:12.080134] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x138c690) 00:24:46.786 [2024-11-15 11:48:12.080140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.786 [2024-11-15 11:48:12.080145] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:46.786 [2024-11-15 11:48:12.080154] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:46.786 [2024-11-15 11:48:12.080160] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:46.786 [2024-11-15 11:48:12.080164] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x138c690) 00:24:46.786 [2024-11-15 11:48:12.080171] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.786 [2024-11-15 11:48:12.080183] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ee100, cid 0, qid 0 00:24:46.786 [2024-11-15 11:48:12.080189] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ee280, cid 1, qid 0 00:24:46.786 [2024-11-15 11:48:12.080194] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ee400, cid 2, qid 0 00:24:46.786 [2024-11-15 11:48:12.080199] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ee580, cid 3, qid 0 00:24:46.786 [2024-11-15 11:48:12.080203] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ee700, cid 4, qid 0 00:24:46.786 [2024-11-15 11:48:12.080461] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:46.786 [2024-11-15 11:48:12.080468] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:46.786 [2024-11-15 11:48:12.080471] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:24:46.786 [2024-11-15 11:48:12.080475] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ee700) on tqpair=0x138c690 00:24:46.786 [2024-11-15 11:48:12.080483] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:24:46.786 [2024-11-15 11:48:12.080489] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:24:46.786 [2024-11-15 11:48:12.080501] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:46.786 [2024-11-15 11:48:12.080505] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x138c690) 00:24:46.786 [2024-11-15 11:48:12.080512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.786 [2024-11-15 11:48:12.080523] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ee700, cid 4, qid 0 00:24:46.786 [2024-11-15 11:48:12.080714] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:46.786 [2024-11-15 11:48:12.080721] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:46.786 [2024-11-15 11:48:12.080725] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:46.786 [2024-11-15 11:48:12.080729] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x138c690): datao=0, datal=4096, cccid=4 00:24:46.786 [2024-11-15 11:48:12.080737] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13ee700) on tqpair(0x138c690): expected_datao=0, payload_size=4096 00:24:46.786 [2024-11-15 11:48:12.080741] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:46.786 [2024-11-15 11:48:12.080749] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:46.786 [2024-11-15 11:48:12.080753] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:46.786 [2024-11-15 11:48:12.080917] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:46.786 [2024-11-15 11:48:12.080925] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:46.786 [2024-11-15 11:48:12.080929] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:46.786 [2024-11-15 11:48:12.080933] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ee700) on tqpair=0x138c690 00:24:46.786 [2024-11-15 11:48:12.080946] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:24:46.786 [2024-11-15 11:48:12.080974] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:46.786 [2024-11-15 11:48:12.080978] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x138c690) 00:24:46.786 [2024-11-15 11:48:12.080985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.786 [2024-11-15 11:48:12.080992] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:46.786 [2024-11-15 11:48:12.080996] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:46.786 [2024-11-15 11:48:12.081000] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x138c690) 00:24:46.786 [2024-11-15 11:48:12.081006] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:46.786 [2024-11-15 11:48:12.081021] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ee700, cid 4, qid 0 00:24:46.786 [2024-11-15 11:48:12.081027] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ee880, cid 5, qid 0 00:24:46.786 [2024-11-15 11:48:12.081291] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:46.786 [2024-11-15 11:48:12.081299] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:46.786 [2024-11-15 11:48:12.081302] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:46.786 [2024-11-15 11:48:12.081306] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x138c690): datao=0, datal=1024, cccid=4 00:24:46.786 [2024-11-15 11:48:12.081310] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13ee700) on tqpair(0x138c690): expected_datao=0, payload_size=1024 00:24:46.786 [2024-11-15 11:48:12.081315] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:46.786 [2024-11-15 11:48:12.081321] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:46.786 [2024-11-15 11:48:12.081325] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:46.786 [2024-11-15 11:48:12.081331] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:46.786 [2024-11-15 11:48:12.081337] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:46.786 [2024-11-15 11:48:12.081340] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:46.786 [2024-11-15 11:48:12.081344] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ee880) on tqpair=0x138c690 00:24:46.786 [2024-11-15 11:48:12.125576] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:46.787 [2024-11-15 11:48:12.125590] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:46.787 [2024-11-15 11:48:12.125594] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:46.787 [2024-11-15 11:48:12.125600] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ee700) on tqpair=0x138c690 00:24:46.787 [2024-11-15 11:48:12.125614] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:46.787 [2024-11-15 11:48:12.125619] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x138c690) 00:24:46.787 [2024-11-15 11:48:12.125632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.787 [2024-11-15 11:48:12.125651] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ee700, cid 4, qid 0 00:24:46.787 [2024-11-15 11:48:12.125909] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:46.787 [2024-11-15 11:48:12.125918] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:46.787 [2024-11-15 11:48:12.125922] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:46.787 [2024-11-15 11:48:12.125925] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x138c690): datao=0, datal=3072, cccid=4 00:24:46.787 [2024-11-15 11:48:12.125930] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13ee700) on tqpair(0x138c690): expected_datao=0, payload_size=3072 00:24:46.787 [2024-11-15 11:48:12.125934] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:46.787 [2024-11-15 11:48:12.125941] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:46.787 [2024-11-15 11:48:12.125945] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:46.787 [2024-11-15 11:48:12.126054] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:46.787 [2024-11-15 11:48:12.126063] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:46.787 [2024-11-15 11:48:12.126067] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:46.787 [2024-11-15 11:48:12.126071] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ee700) on tqpair=0x138c690 00:24:46.787 [2024-11-15 11:48:12.126079] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:46.787 [2024-11-15 11:48:12.126083] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x138c690) 00:24:46.787 [2024-11-15 11:48:12.126090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.787 [2024-11-15 11:48:12.126104] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ee700, cid 4, qid 0 00:24:46.787 [2024-11-15 11:48:12.126327] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:46.787 [2024-11-15 11:48:12.126335] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:46.787 [2024-11-15 11:48:12.126338] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:46.787 [2024-11-15 11:48:12.126342] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x138c690): datao=0, datal=8, cccid=4 00:24:46.787 [2024-11-15 11:48:12.126347] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x13ee700) on tqpair(0x138c690): expected_datao=0, payload_size=8 00:24:46.787 [2024-11-15 11:48:12.126351] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:46.787 [2024-11-15 11:48:12.126357] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:46.787 [2024-11-15 11:48:12.126361] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:46.787 [2024-11-15 11:48:12.166761] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:46.787 [2024-11-15 11:48:12.166776] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:46.787 [2024-11-15 11:48:12.166780] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:46.787 [2024-11-15 11:48:12.166785] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ee700) on tqpair=0x138c690 00:24:46.787 ===================================================== 00:24:46.787 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:46.787 ===================================================== 00:24:46.787 Controller Capabilities/Features 00:24:46.787 ================================ 00:24:46.787 Vendor ID: 0000 00:24:46.787 Subsystem Vendor ID: 0000 00:24:46.787 Serial Number: .................... 00:24:46.787 Model Number: ........................................ 
00:24:46.787 Firmware Version: 25.01 00:24:46.787 Recommended Arb Burst: 0 00:24:46.787 IEEE OUI Identifier: 00 00 00 00:24:46.787 Multi-path I/O 00:24:46.787 May have multiple subsystem ports: No 00:24:46.787 May have multiple controllers: No 00:24:46.787 Associated with SR-IOV VF: No 00:24:46.787 Max Data Transfer Size: 131072 00:24:46.787 Max Number of Namespaces: 0 00:24:46.787 Max Number of I/O Queues: 1024 00:24:46.787 NVMe Specification Version (VS): 1.3 00:24:46.787 NVMe Specification Version (Identify): 1.3 00:24:46.787 Maximum Queue Entries: 128 00:24:46.787 Contiguous Queues Required: Yes 00:24:46.787 Arbitration Mechanisms Supported 00:24:46.787 Weighted Round Robin: Not Supported 00:24:46.787 Vendor Specific: Not Supported 00:24:46.787 Reset Timeout: 15000 ms 00:24:46.787 Doorbell Stride: 4 bytes 00:24:46.787 NVM Subsystem Reset: Not Supported 00:24:46.787 Command Sets Supported 00:24:46.787 NVM Command Set: Supported 00:24:46.787 Boot Partition: Not Supported 00:24:46.787 Memory Page Size Minimum: 4096 bytes 00:24:46.787 Memory Page Size Maximum: 4096 bytes 00:24:46.787 Persistent Memory Region: Not Supported 00:24:46.787 Optional Asynchronous Events Supported 00:24:46.787 Namespace Attribute Notices: Not Supported 00:24:46.787 Firmware Activation Notices: Not Supported 00:24:46.787 ANA Change Notices: Not Supported 00:24:46.787 PLE Aggregate Log Change Notices: Not Supported 00:24:46.787 LBA Status Info Alert Notices: Not Supported 00:24:46.787 EGE Aggregate Log Change Notices: Not Supported 00:24:46.787 Normal NVM Subsystem Shutdown event: Not Supported 00:24:46.787 Zone Descriptor Change Notices: Not Supported 00:24:46.787 Discovery Log Change Notices: Supported 00:24:46.787 Controller Attributes 00:24:46.787 128-bit Host Identifier: Not Supported 00:24:46.787 Non-Operational Permissive Mode: Not Supported 00:24:46.787 NVM Sets: Not Supported 00:24:46.787 Read Recovery Levels: Not Supported 00:24:46.787 Endurance Groups: Not Supported 00:24:46.787 Predictable Latency Mode: Not Supported 00:24:46.787 Traffic Based Keep ALive: Not Supported 00:24:46.787 Namespace Granularity: Not Supported 00:24:46.787 SQ Associations: Not Supported 00:24:46.787 UUID List: Not Supported 00:24:46.787 Multi-Domain Subsystem: Not Supported 00:24:46.787 Fixed Capacity Management: Not Supported 00:24:46.787 Variable Capacity Management: Not Supported 00:24:46.787 Delete Endurance Group: Not Supported 00:24:46.787 Delete NVM Set: Not Supported 00:24:46.787 Extended LBA Formats Supported: Not Supported 00:24:46.787 Flexible Data Placement Supported: Not Supported 00:24:46.787 00:24:46.787 Controller Memory Buffer Support 00:24:46.787 ================================ 00:24:46.787 Supported: No 00:24:46.787 00:24:46.787 Persistent Memory Region Support 00:24:46.787 ================================ 00:24:46.787 Supported: No 00:24:46.787 00:24:46.787 Admin Command Set Attributes 00:24:46.787 ============================ 00:24:46.787 Security Send/Receive: Not Supported 00:24:46.787 Format NVM: Not Supported 00:24:46.787 Firmware Activate/Download: Not Supported 00:24:46.787 Namespace Management: Not Supported 00:24:46.787 Device Self-Test: Not Supported 00:24:46.787 Directives: Not Supported 00:24:46.787 NVMe-MI: Not Supported 00:24:46.787 Virtualization Management: Not Supported 00:24:46.787 Doorbell Buffer Config: Not Supported 00:24:46.787 Get LBA Status Capability: Not Supported 00:24:46.787 Command & Feature Lockdown Capability: Not Supported 00:24:46.787 Abort Command Limit: 1 00:24:46.787 Async 
Event Request Limit: 4 00:24:46.787 Number of Firmware Slots: N/A 00:24:46.787 Firmware Slot 1 Read-Only: N/A 00:24:46.787 Firmware Activation Without Reset: N/A 00:24:46.787 Multiple Update Detection Support: N/A 00:24:46.787 Firmware Update Granularity: No Information Provided 00:24:46.787 Per-Namespace SMART Log: No 00:24:46.788 Asymmetric Namespace Access Log Page: Not Supported 00:24:46.788 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:46.788 Command Effects Log Page: Not Supported 00:24:46.788 Get Log Page Extended Data: Supported 00:24:46.788 Telemetry Log Pages: Not Supported 00:24:46.788 Persistent Event Log Pages: Not Supported 00:24:46.788 Supported Log Pages Log Page: May Support 00:24:46.788 Commands Supported & Effects Log Page: Not Supported 00:24:46.788 Feature Identifiers & Effects Log Page:May Support 00:24:46.788 NVMe-MI Commands & Effects Log Page: May Support 00:24:46.788 Data Area 4 for Telemetry Log: Not Supported 00:24:46.788 Error Log Page Entries Supported: 128 00:24:46.788 Keep Alive: Not Supported 00:24:46.788 00:24:46.788 NVM Command Set Attributes 00:24:46.788 ========================== 00:24:46.788 Submission Queue Entry Size 00:24:46.788 Max: 1 00:24:46.788 Min: 1 00:24:46.788 Completion Queue Entry Size 00:24:46.788 Max: 1 00:24:46.788 Min: 1 00:24:46.788 Number of Namespaces: 0 00:24:46.788 Compare Command: Not Supported 00:24:46.788 Write Uncorrectable Command: Not Supported 00:24:46.788 Dataset Management Command: Not Supported 00:24:46.788 Write Zeroes Command: Not Supported 00:24:46.788 Set Features Save Field: Not Supported 00:24:46.788 Reservations: Not Supported 00:24:46.788 Timestamp: Not Supported 00:24:46.788 Copy: Not Supported 00:24:46.788 Volatile Write Cache: Not Present 00:24:46.788 Atomic Write Unit (Normal): 1 00:24:46.788 Atomic Write Unit (PFail): 1 00:24:46.788 Atomic Compare & Write Unit: 1 00:24:46.788 Fused Compare & Write: Supported 00:24:46.788 Scatter-Gather List 00:24:46.788 SGL Command Set: Supported 00:24:46.788 SGL Keyed: Supported 00:24:46.788 SGL Bit Bucket Descriptor: Not Supported 00:24:46.788 SGL Metadata Pointer: Not Supported 00:24:46.788 Oversized SGL: Not Supported 00:24:46.788 SGL Metadata Address: Not Supported 00:24:46.788 SGL Offset: Supported 00:24:46.788 Transport SGL Data Block: Not Supported 00:24:46.788 Replay Protected Memory Block: Not Supported 00:24:46.788 00:24:46.788 Firmware Slot Information 00:24:46.788 ========================= 00:24:46.788 Active slot: 0 00:24:46.788 00:24:46.788 00:24:46.788 Error Log 00:24:46.788 ========= 00:24:46.788 00:24:46.788 Active Namespaces 00:24:46.788 ================= 00:24:46.788 Discovery Log Page 00:24:46.788 ================== 00:24:46.788 Generation Counter: 2 00:24:46.788 Number of Records: 2 00:24:46.788 Record Format: 0 00:24:46.788 00:24:46.788 Discovery Log Entry 0 00:24:46.788 ---------------------- 00:24:46.788 Transport Type: 3 (TCP) 00:24:46.788 Address Family: 1 (IPv4) 00:24:46.788 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:46.788 Entry Flags: 00:24:46.788 Duplicate Returned Information: 1 00:24:46.788 Explicit Persistent Connection Support for Discovery: 1 00:24:46.788 Transport Requirements: 00:24:46.788 Secure Channel: Not Required 00:24:46.788 Port ID: 0 (0x0000) 00:24:46.788 Controller ID: 65535 (0xffff) 00:24:46.788 Admin Max SQ Size: 128 00:24:46.788 Transport Service Identifier: 4420 00:24:46.788 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:46.788 Transport Address: 10.0.0.2 00:24:46.788 
Discovery Log Entry 1 00:24:46.788 ---------------------- 00:24:46.788 Transport Type: 3 (TCP) 00:24:46.788 Address Family: 1 (IPv4) 00:24:46.788 Subsystem Type: 2 (NVM Subsystem) 00:24:46.788 Entry Flags: 00:24:46.788 Duplicate Returned Information: 0 00:24:46.788 Explicit Persistent Connection Support for Discovery: 0 00:24:46.788 Transport Requirements: 00:24:46.788 Secure Channel: Not Required 00:24:46.788 Port ID: 0 (0x0000) 00:24:46.788 Controller ID: 65535 (0xffff) 00:24:46.788 Admin Max SQ Size: 128 00:24:46.788 Transport Service Identifier: 4420 00:24:46.788 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:46.788 Transport Address: 10.0.0.2 [2024-11-15 11:48:12.166895] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:24:46.788 [2024-11-15 11:48:12.166907] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ee100) on tqpair=0x138c690 00:24:46.788 [2024-11-15 11:48:12.166915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.788 [2024-11-15 11:48:12.166921] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ee280) on tqpair=0x138c690 00:24:46.788 [2024-11-15 11:48:12.166926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.788 [2024-11-15 11:48:12.166933] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ee400) on tqpair=0x138c690 00:24:46.788 [2024-11-15 11:48:12.166938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.788 [2024-11-15 11:48:12.166943] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ee580) on tqpair=0x138c690 00:24:46.788 [2024-11-15 11:48:12.166948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:46.788 [2024-11-15 11:48:12.166960] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:46.788 [2024-11-15 11:48:12.166964] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:46.788 [2024-11-15 11:48:12.166968] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x138c690) 00:24:46.788 [2024-11-15 11:48:12.166977] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.788 [2024-11-15 11:48:12.166995] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ee580, cid 3, qid 0 00:24:46.788 [2024-11-15 11:48:12.167193] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:46.788 [2024-11-15 11:48:12.167199] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:46.788 [2024-11-15 11:48:12.167203] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:46.788 [2024-11-15 11:48:12.167207] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ee580) on tqpair=0x138c690 00:24:46.788 [2024-11-15 11:48:12.167214] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:46.788 [2024-11-15 11:48:12.167218] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:46.788 [2024-11-15 11:48:12.167222] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x138c690) 00:24:46.788 [2024-11-15 
11:48:12.167228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.788 [2024-11-15 11:48:12.167242] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ee580, cid 3, qid 0 00:24:46.788 [2024-11-15 11:48:12.167458] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:46.788 [2024-11-15 11:48:12.167465] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:46.788 [2024-11-15 11:48:12.167468] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:46.788 [2024-11-15 11:48:12.167472] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ee580) on tqpair=0x138c690 00:24:46.788 [2024-11-15 11:48:12.167477] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:24:46.788 [2024-11-15 11:48:12.167482] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:24:46.788 [2024-11-15 11:48:12.167492] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:46.788 [2024-11-15 11:48:12.167496] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:46.788 [2024-11-15 11:48:12.167499] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x138c690) 00:24:46.788 [2024-11-15 11:48:12.167506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.788 [2024-11-15 11:48:12.167517] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ee580, cid 3, qid 0 00:24:46.788 [2024-11-15 11:48:12.167717] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:46.788 [2024-11-15 11:48:12.167724] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:46.788 [2024-11-15 11:48:12.167727] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:46.788 [2024-11-15 11:48:12.167731] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ee580) on tqpair=0x138c690 00:24:46.788 [2024-11-15 11:48:12.167741] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:46.788 [2024-11-15 11:48:12.167745] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:46.788 [2024-11-15 11:48:12.167749] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x138c690) 00:24:46.788 [2024-11-15 11:48:12.167759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.788 [2024-11-15 11:48:12.167770] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ee580, cid 3, qid 0 00:24:46.788 [2024-11-15 11:48:12.167945] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:46.788 [2024-11-15 11:48:12.167951] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:46.788 [2024-11-15 11:48:12.167955] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:46.788 [2024-11-15 11:48:12.167959] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ee580) on tqpair=0x138c690 00:24:46.788 [2024-11-15 11:48:12.167968] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:46.788 [2024-11-15 11:48:12.167972] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:46.788 [2024-11-15 11:48:12.167976] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x138c690) 00:24:46.788 [2024-11-15 11:48:12.167983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.788 [2024-11-15 11:48:12.167994] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ee580, cid 3, qid 0 00:24:46.788 [2024-11-15 11:48:12.168161] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:46.788 [2024-11-15 11:48:12.168167] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:46.789 [2024-11-15 11:48:12.168171] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:46.789 [2024-11-15 11:48:12.168174] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ee580) on tqpair=0x138c690 00:24:46.789 [2024-11-15 11:48:12.168184] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:46.789 [2024-11-15 11:48:12.168188] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:46.789 [2024-11-15 11:48:12.168191] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x138c690) 00:24:46.789 [2024-11-15 11:48:12.168198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.789 [2024-11-15 11:48:12.168209] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ee580, cid 3, qid 0 00:24:46.789 [2024-11-15 11:48:12.168380] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:46.789 [2024-11-15 11:48:12.168386] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:46.789 [2024-11-15 11:48:12.168390] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:46.789 [2024-11-15 11:48:12.168393] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ee580) on tqpair=0x138c690 00:24:46.789 [2024-11-15 11:48:12.168403] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:46.789 [2024-11-15 11:48:12.168407] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:46.789 [2024-11-15 11:48:12.168411] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x138c690) 00:24:46.789 [2024-11-15 11:48:12.168418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.789 [2024-11-15 11:48:12.168428] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ee580, cid 3, qid 0 00:24:46.789 [2024-11-15 11:48:12.168602] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:46.789 [2024-11-15 11:48:12.168608] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:46.789 [2024-11-15 11:48:12.168612] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:46.789 [2024-11-15 11:48:12.168616] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ee580) on tqpair=0x138c690 00:24:46.789 [2024-11-15 11:48:12.168626] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:46.789 [2024-11-15 11:48:12.168630] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:46.789 [2024-11-15 11:48:12.168633] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x138c690) 00:24:46.789 [2024-11-15 11:48:12.168643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.789 [2024-11-15 11:48:12.168654] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ee580, cid 3, qid 0 00:24:46.789 [2024-11-15 11:48:12.168882] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:46.789 [2024-11-15 11:48:12.168888] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:46.789 [2024-11-15 11:48:12.168891] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:46.789 [2024-11-15 11:48:12.168895] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ee580) on tqpair=0x138c690 00:24:46.789 [2024-11-15 11:48:12.168906] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:46.789 [2024-11-15 11:48:12.168910] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:46.789 [2024-11-15 11:48:12.168914] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x138c690) 00:24:46.789 [2024-11-15 11:48:12.168921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.789 [2024-11-15 11:48:12.168932] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ee580, cid 3, qid 0 00:24:46.789 [2024-11-15 11:48:12.169159] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:46.789 [2024-11-15 11:48:12.169166] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:46.789 [2024-11-15 11:48:12.169170] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:46.789 [2024-11-15 11:48:12.169174] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ee580) on tqpair=0x138c690 00:24:46.789 [2024-11-15 11:48:12.169184] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:46.789 [2024-11-15 11:48:12.169188] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:46.789 [2024-11-15 11:48:12.169192] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x138c690) 00:24:46.789 [2024-11-15 11:48:12.169199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.789 [2024-11-15 11:48:12.169210] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ee580, cid 3, qid 0 00:24:46.789 [2024-11-15 11:48:12.169381] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:46.789 [2024-11-15 11:48:12.169387] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:46.789 [2024-11-15 11:48:12.169391] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:46.789 [2024-11-15 11:48:12.169394] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ee580) on tqpair=0x138c690 00:24:46.789 [2024-11-15 11:48:12.169404] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:46.789 [2024-11-15 11:48:12.169409] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:46.789 [2024-11-15 11:48:12.169412] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x138c690) 00:24:46.789 [2024-11-15 11:48:12.169419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.789 [2024-11-15 11:48:12.169430] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ee580, cid 3, qid 0 00:24:46.789 
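Editor's note: the GET LOG PAGE (02) commands earlier in the trace (cdw10:00ff0070, then 02ff0070, then 00010070) are one logical read of discovery log page 0x70 split into a 4096-byte read, 1024- and 3072-byte continuation reads, and a final 8-byte re-read of the generation counter to confirm the log did not change mid-read; the result is the two-entry "Discovery Log Page" dump above. A hedged sketch of the same fetch through SPDK's public API, assuming ctrlr is an already-connected discovery controller and ignoring any entries beyond the first 4096 bytes:

#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

static bool g_log_done;

static void get_log_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	g_log_done = true;
}

static void dump_discovery_log(struct spdk_nvme_ctrlr *ctrlr)
{
	/* 4096 bytes = 1024-byte header + up to three 1024-byte records;
	 * the trace above shows numrec = 2, so this is enough here. */
	struct spdk_nvmf_discovery_log_page *log = calloc(1, 4096);

	g_log_done = false;
	spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
					 SPDK_NVME_GLOBAL_NS_TAG,
					 log, 4096, 0, get_log_cb, NULL);
	while (!g_log_done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}

	printf("genctr=%" PRIu64 " numrec=%" PRIu64 "\n",
	       log->genctr, log->numrec);
	for (uint64_t i = 0; i < log->numrec; i++) {
		struct spdk_nvmf_discovery_log_page_entry *e = &log->entries[i];
		/* traddr/trsvcid are space-padded, not NUL-terminated,
		 * hence the printf precision caps. */
		printf("entry %" PRIu64 ": subnqn=%.256s traddr=%.256s "
		       "trsvcid=%.32s\n", i, (const char *)e->subnqn,
		       (const char *)e->traddr, (const char *)e->trsvcid);
	}
	free(log);
}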
[2024-11-15 11:48:12.173573] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:46.789 [2024-11-15 11:48:12.173583] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:46.789 [2024-11-15 11:48:12.173587] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:46.789 [2024-11-15 11:48:12.173590] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ee580) on tqpair=0x138c690 00:24:46.789 [2024-11-15 11:48:12.173601] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:46.789 [2024-11-15 11:48:12.173605] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:46.789 [2024-11-15 11:48:12.173609] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x138c690) 00:24:46.789 [2024-11-15 11:48:12.173616] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.789 [2024-11-15 11:48:12.173631] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x13ee580, cid 3, qid 0 00:24:46.789 [2024-11-15 11:48:12.173820] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:46.789 [2024-11-15 11:48:12.173826] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:46.789 [2024-11-15 11:48:12.173830] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:46.789 [2024-11-15 11:48:12.173833] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x13ee580) on tqpair=0x138c690 00:24:46.789 [2024-11-15 11:48:12.173841] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:24:46.789 00:24:46.789 11:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:46.789 [2024-11-15 11:48:12.222536] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
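Editor's note: host/identify.sh is now invoking SPDK's spdk_nvme_identify example against nqn.2016-06.io.spdk:cnode1; the rest of the trace is that tool connecting and dumping identify data. A stripped-down sketch of the same flow using the public API (error handling trimmed, program name ours): parse the -r transport ID string, connect synchronously, which drives the FABRIC CONNECT / read vs / read cap / CC.EN handshake traced below, read the cached identify data, then detach, which performs the CC.SHN shutdown that the discovery-controller session above finished in 6 milliseconds.

#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch"; /* our name, not the tool's */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same -r string the test passes on the command line above. */
	memset(&trid, 0, sizeof(trid));
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 "
	    "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	ctrlr = spdk_nvme_connect(&trid, NULL, 0); /* polls init to "ready" */
	if (ctrlr == NULL) {
		fprintf(stderr, "connect failed\n");
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr); /* cached IDENTIFY data */
	printf("subnqn: %s\n", (const char *)cdata->subnqn);

	spdk_nvme_detach(ctrlr); /* CC.SHN shutdown, as traced above */
	return 0;
}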
00:24:46.789 [2024-11-15 11:48:12.222590] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1169103 ] 00:24:47.053 [2024-11-15 11:48:12.281929] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:24:47.053 [2024-11-15 11:48:12.281990] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:47.053 [2024-11-15 11:48:12.281995] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:47.053 [2024-11-15 11:48:12.282013] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:47.053 [2024-11-15 11:48:12.282026] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:47.053 [2024-11-15 11:48:12.285872] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:24:47.053 [2024-11-15 11:48:12.285911] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x242c690 0 00:24:47.053 [2024-11-15 11:48:12.293582] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:47.053 [2024-11-15 11:48:12.293596] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:47.053 [2024-11-15 11:48:12.293601] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:47.053 [2024-11-15 11:48:12.293604] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:47.053 [2024-11-15 11:48:12.293641] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.053 [2024-11-15 11:48:12.293647] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.053 [2024-11-15 11:48:12.293651] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x242c690) 00:24:47.053 [2024-11-15 11:48:12.293666] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:47.053 [2024-11-15 11:48:12.293690] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248e100, cid 0, qid 0 00:24:47.053 [2024-11-15 11:48:12.300576] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.053 [2024-11-15 11:48:12.300586] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.053 [2024-11-15 11:48:12.300590] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.053 [2024-11-15 11:48:12.300595] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248e100) on tqpair=0x242c690 00:24:47.053 [2024-11-15 11:48:12.300604] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:47.053 [2024-11-15 11:48:12.300612] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:24:47.053 [2024-11-15 11:48:12.300622] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:24:47.053 [2024-11-15 11:48:12.300637] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.053 [2024-11-15 11:48:12.300641] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.053 [2024-11-15 11:48:12.300645] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x242c690) 00:24:47.053 [2024-11-15 11:48:12.300654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.053 [2024-11-15 11:48:12.300671] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248e100, cid 0, qid 0 00:24:47.053 [2024-11-15 11:48:12.300869] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.053 [2024-11-15 11:48:12.300876] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.053 [2024-11-15 11:48:12.300879] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.053 [2024-11-15 11:48:12.300883] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248e100) on tqpair=0x242c690 00:24:47.053 [2024-11-15 11:48:12.300888] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:24:47.053 [2024-11-15 11:48:12.300896] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:24:47.053 [2024-11-15 11:48:12.300903] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.053 [2024-11-15 11:48:12.300907] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.053 [2024-11-15 11:48:12.300911] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x242c690) 00:24:47.053 [2024-11-15 11:48:12.300918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.053 [2024-11-15 11:48:12.300929] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248e100, cid 0, qid 0 00:24:47.053 [2024-11-15 11:48:12.301146] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.053 [2024-11-15 11:48:12.301152] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.053 [2024-11-15 11:48:12.301156] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.053 [2024-11-15 11:48:12.301160] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248e100) on tqpair=0x242c690 00:24:47.053 [2024-11-15 11:48:12.301165] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:24:47.053 [2024-11-15 11:48:12.301173] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:24:47.053 [2024-11-15 11:48:12.301180] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.053 [2024-11-15 11:48:12.301183] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.053 [2024-11-15 11:48:12.301187] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x242c690) 00:24:47.053 [2024-11-15 11:48:12.301194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.053 [2024-11-15 11:48:12.301205] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248e100, cid 0, qid 0 00:24:47.053 [2024-11-15 11:48:12.301377] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.053 [2024-11-15 11:48:12.301383] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.053 [2024-11-15 
11:48:12.301387] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.053 [2024-11-15 11:48:12.301391] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248e100) on tqpair=0x242c690 00:24:47.053 [2024-11-15 11:48:12.301396] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:47.053 [2024-11-15 11:48:12.301406] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.053 [2024-11-15 11:48:12.301412] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.053 [2024-11-15 11:48:12.301416] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x242c690) 00:24:47.053 [2024-11-15 11:48:12.301423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.053 [2024-11-15 11:48:12.301434] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248e100, cid 0, qid 0 00:24:47.053 [2024-11-15 11:48:12.301623] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.053 [2024-11-15 11:48:12.301630] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.053 [2024-11-15 11:48:12.301633] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.053 [2024-11-15 11:48:12.301638] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248e100) on tqpair=0x242c690 00:24:47.053 [2024-11-15 11:48:12.301642] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:24:47.053 [2024-11-15 11:48:12.301647] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:24:47.053 [2024-11-15 11:48:12.301655] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:47.053 [2024-11-15 11:48:12.301764] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:24:47.053 [2024-11-15 11:48:12.301769] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:47.053 [2024-11-15 11:48:12.301777] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.053 [2024-11-15 11:48:12.301781] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.053 [2024-11-15 11:48:12.301785] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x242c690) 00:24:47.053 [2024-11-15 11:48:12.301791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.053 [2024-11-15 11:48:12.301803] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248e100, cid 0, qid 0 00:24:47.053 [2024-11-15 11:48:12.301999] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.053 [2024-11-15 11:48:12.302006] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.053 [2024-11-15 11:48:12.302009] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.053 [2024-11-15 11:48:12.302013] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248e100) on tqpair=0x242c690 00:24:47.053 
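Editor's note: every "setting state to ... (timeout ...)" line above is one step of the driver's asynchronous init state machine; spdk_nvme_connect() simply polls that machine until it reaches "ready". A hedged sketch of the nonblocking variant of the same connect, which is what lets one thread interleave many controller bring-ups; the function and callback names here are ours:

#include <errno.h>
#include <stddef.h>
#include "spdk/nvme.h"

static void attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
		      struct spdk_nvme_ctrlr *ctrlr,
		      const struct spdk_nvme_ctrlr_opts *opts)
{
	/* Runs once the CC.EN/CSTS.RDY handshake, IDENTIFY, AER setup and
	 * keep-alive configuration traced in this log have all completed. */
}

static int connect_nonblocking(const struct spdk_nvme_transport_id *trid)
{
	struct spdk_nvme_probe_ctx *ctx;

	ctx = spdk_nvme_connect_async(trid, NULL, attach_cb);
	if (ctx == NULL) {
		return -1;
	}
	/* Each poll advances the state machine by a few states; -EAGAIN
	 * means "still initializing", matching the repeated state
	 * transitions in the trace. */
	while (spdk_nvme_probe_poll_async(ctx) == -EAGAIN) {
		/* in a real reactor, do other work here */
	}
	return 0;
}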
[2024-11-15 11:48:12.302017] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:47.053 [2024-11-15 11:48:12.302027] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.053 [2024-11-15 11:48:12.302031] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.053 [2024-11-15 11:48:12.302035] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x242c690) 00:24:47.053 [2024-11-15 11:48:12.302042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.053 [2024-11-15 11:48:12.302052] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248e100, cid 0, qid 0 00:24:47.053 [2024-11-15 11:48:12.302220] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.054 [2024-11-15 11:48:12.302226] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.054 [2024-11-15 11:48:12.302230] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.054 [2024-11-15 11:48:12.302234] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248e100) on tqpair=0x242c690 00:24:47.054 [2024-11-15 11:48:12.302238] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:47.054 [2024-11-15 11:48:12.302245] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:24:47.054 [2024-11-15 11:48:12.302253] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:24:47.054 [2024-11-15 11:48:12.302262] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:24:47.054 [2024-11-15 11:48:12.302271] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.054 [2024-11-15 11:48:12.302275] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x242c690) 00:24:47.054 [2024-11-15 11:48:12.302282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.054 [2024-11-15 11:48:12.302293] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248e100, cid 0, qid 0 00:24:47.054 [2024-11-15 11:48:12.302506] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.054 [2024-11-15 11:48:12.302512] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.054 [2024-11-15 11:48:12.302516] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.054 [2024-11-15 11:48:12.302520] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x242c690): datao=0, datal=4096, cccid=0 00:24:47.054 [2024-11-15 11:48:12.302525] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x248e100) on tqpair(0x242c690): expected_datao=0, payload_size=4096 00:24:47.054 [2024-11-15 11:48:12.302529] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.054 [2024-11-15 11:48:12.302537] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.054 [2024-11-15 11:48:12.302541] 
nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.054 [2024-11-15 11:48:12.342730] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.054 [2024-11-15 11:48:12.342742] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.054 [2024-11-15 11:48:12.342746] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.054 [2024-11-15 11:48:12.342750] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248e100) on tqpair=0x242c690 00:24:47.054 [2024-11-15 11:48:12.342760] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:24:47.054 [2024-11-15 11:48:12.342765] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:24:47.054 [2024-11-15 11:48:12.342769] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:24:47.054 [2024-11-15 11:48:12.342781] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:24:47.054 [2024-11-15 11:48:12.342786] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:24:47.054 [2024-11-15 11:48:12.342791] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:24:47.054 [2024-11-15 11:48:12.342802] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:24:47.054 [2024-11-15 11:48:12.342810] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.054 [2024-11-15 11:48:12.342814] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.054 [2024-11-15 11:48:12.342817] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x242c690) 00:24:47.054 [2024-11-15 11:48:12.342826] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:47.054 [2024-11-15 11:48:12.342839] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248e100, cid 0, qid 0 00:24:47.054 [2024-11-15 11:48:12.343000] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.054 [2024-11-15 11:48:12.343007] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.054 [2024-11-15 11:48:12.343015] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.054 [2024-11-15 11:48:12.343019] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248e100) on tqpair=0x242c690 00:24:47.054 [2024-11-15 11:48:12.343026] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.054 [2024-11-15 11:48:12.343030] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.054 [2024-11-15 11:48:12.343035] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x242c690) 00:24:47.054 [2024-11-15 11:48:12.343042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.054 [2024-11-15 11:48:12.343048] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.054 [2024-11-15 11:48:12.343052] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.054 [2024-11-15 
11:48:12.343055] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x242c690) 00:24:47.054 [2024-11-15 11:48:12.343061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.054 [2024-11-15 11:48:12.343067] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.054 [2024-11-15 11:48:12.343071] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.054 [2024-11-15 11:48:12.343074] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x242c690) 00:24:47.054 [2024-11-15 11:48:12.343080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.054 [2024-11-15 11:48:12.343087] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.054 [2024-11-15 11:48:12.343090] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.054 [2024-11-15 11:48:12.343094] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242c690) 00:24:47.054 [2024-11-15 11:48:12.343100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.054 [2024-11-15 11:48:12.343104] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:47.054 [2024-11-15 11:48:12.343113] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:47.054 [2024-11-15 11:48:12.343119] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.054 [2024-11-15 11:48:12.343123] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x242c690) 00:24:47.054 [2024-11-15 11:48:12.343130] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.054 [2024-11-15 11:48:12.343142] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248e100, cid 0, qid 0 00:24:47.054 [2024-11-15 11:48:12.343148] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248e280, cid 1, qid 0 00:24:47.054 [2024-11-15 11:48:12.343153] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248e400, cid 2, qid 0 00:24:47.054 [2024-11-15 11:48:12.343158] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248e580, cid 3, qid 0 00:24:47.054 [2024-11-15 11:48:12.343162] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248e700, cid 4, qid 0 00:24:47.054 [2024-11-15 11:48:12.343419] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.054 [2024-11-15 11:48:12.343425] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.054 [2024-11-15 11:48:12.343429] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.054 [2024-11-15 11:48:12.343432] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248e700) on tqpair=0x242c690 00:24:47.054 [2024-11-15 11:48:12.343440] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:24:47.054 [2024-11-15 11:48:12.343448] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:47.054 [2024-11-15 11:48:12.343456] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:24:47.054 [2024-11-15 11:48:12.343463] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:47.054 [2024-11-15 11:48:12.343469] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.054 [2024-11-15 11:48:12.343473] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.054 [2024-11-15 11:48:12.343476] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x242c690) 00:24:47.054 [2024-11-15 11:48:12.343483] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:47.054 [2024-11-15 11:48:12.343494] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248e700, cid 4, qid 0 00:24:47.054 [2024-11-15 11:48:12.347573] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.054 [2024-11-15 11:48:12.347582] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.054 [2024-11-15 11:48:12.347585] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.054 [2024-11-15 11:48:12.347589] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248e700) on tqpair=0x242c690 00:24:47.054 [2024-11-15 11:48:12.347659] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:24:47.054 [2024-11-15 11:48:12.347671] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:47.054 [2024-11-15 11:48:12.347679] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.054 [2024-11-15 11:48:12.347683] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x242c690) 00:24:47.054 [2024-11-15 11:48:12.347689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.054 [2024-11-15 11:48:12.347702] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248e700, cid 4, qid 0 00:24:47.054 [2024-11-15 11:48:12.347931] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.054 [2024-11-15 11:48:12.347938] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.054 [2024-11-15 11:48:12.347941] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.054 [2024-11-15 11:48:12.347945] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x242c690): datao=0, datal=4096, cccid=4 00:24:47.054 [2024-11-15 11:48:12.347950] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x248e700) on tqpair(0x242c690): expected_datao=0, payload_size=4096 00:24:47.054 [2024-11-15 11:48:12.347954] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.054 [2024-11-15 11:48:12.347974] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.054 [2024-11-15 11:48:12.347978] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.054 [2024-11-15 
11:48:12.388793] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.054 [2024-11-15 11:48:12.388804] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.054 [2024-11-15 11:48:12.388807] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.055 [2024-11-15 11:48:12.388812] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248e700) on tqpair=0x242c690 00:24:47.055 [2024-11-15 11:48:12.388823] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:24:47.055 [2024-11-15 11:48:12.388835] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:24:47.055 [2024-11-15 11:48:12.388845] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:24:47.055 [2024-11-15 11:48:12.388855] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.055 [2024-11-15 11:48:12.388859] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x242c690) 00:24:47.055 [2024-11-15 11:48:12.388866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.055 [2024-11-15 11:48:12.388878] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248e700, cid 4, qid 0 00:24:47.055 [2024-11-15 11:48:12.389071] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.055 [2024-11-15 11:48:12.389077] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.055 [2024-11-15 11:48:12.389081] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.055 [2024-11-15 11:48:12.389085] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x242c690): datao=0, datal=4096, cccid=4 00:24:47.055 [2024-11-15 11:48:12.389089] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x248e700) on tqpair(0x242c690): expected_datao=0, payload_size=4096 00:24:47.055 [2024-11-15 11:48:12.389094] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.055 [2024-11-15 11:48:12.389107] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.055 [2024-11-15 11:48:12.389111] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.055 [2024-11-15 11:48:12.433580] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.055 [2024-11-15 11:48:12.433593] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.055 [2024-11-15 11:48:12.433597] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.055 [2024-11-15 11:48:12.433601] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248e700) on tqpair=0x242c690 00:24:47.055 [2024-11-15 11:48:12.433620] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:47.055 [2024-11-15 11:48:12.433631] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:47.055 [2024-11-15 11:48:12.433639] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.055 [2024-11-15 11:48:12.433643] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x242c690) 00:24:47.055 [2024-11-15 11:48:12.433651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.055 [2024-11-15 11:48:12.433665] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248e700, cid 4, qid 0 00:24:47.055 [2024-11-15 11:48:12.433846] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.055 [2024-11-15 11:48:12.433854] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.055 [2024-11-15 11:48:12.433857] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.055 [2024-11-15 11:48:12.433861] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x242c690): datao=0, datal=4096, cccid=4 00:24:47.055 [2024-11-15 11:48:12.433866] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x248e700) on tqpair(0x242c690): expected_datao=0, payload_size=4096 00:24:47.055 [2024-11-15 11:48:12.433871] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.055 [2024-11-15 11:48:12.433885] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.055 [2024-11-15 11:48:12.433890] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.055 [2024-11-15 11:48:12.474747] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.055 [2024-11-15 11:48:12.474758] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.055 [2024-11-15 11:48:12.474762] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.055 [2024-11-15 11:48:12.474766] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248e700) on tqpair=0x242c690 00:24:47.055 [2024-11-15 11:48:12.474775] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:47.055 [2024-11-15 11:48:12.474788] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:24:47.055 [2024-11-15 11:48:12.474798] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:24:47.055 [2024-11-15 11:48:12.474804] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:47.055 [2024-11-15 11:48:12.474810] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:47.055 [2024-11-15 11:48:12.474815] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:24:47.055 [2024-11-15 11:48:12.474820] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:24:47.055 [2024-11-15 11:48:12.474825] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:24:47.055 [2024-11-15 11:48:12.474830] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:24:47.055 [2024-11-15 11:48:12.474849] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.055 
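
The debug entries above trace SPDK's controller-initialization state machine end to end: wait for CSTS.RDY = 1, IDENTIFY controller, configure AER, negotiate keep-alive and queue counts, identify active namespaces and their descriptors, then transition to "ready". An application never drives these steps by hand; they all run inside one synchronous connect call. A minimal host-side sketch, assuming only SPDK's public NVMe API (spdk/nvme.h) and reusing the target address and subsystem NQN from this run (the app name "identify_sketch" is illustrative, not part of the test):

    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";  /* hypothetical app name */
        if (spdk_env_init(&env_opts) < 0) {
            return 1;
        }

        /* Same target as this run: NVMe/TCP at 10.0.0.2:4420, cnode1. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
            return 1;
        }

        /* Runs the whole state machine traced above (CSTS.RDY wait,
         * IDENTIFY, AER setup, keep-alive, queue-count negotiation,
         * namespace identify) and returns once the controller is ready. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            fprintf(stderr, "spdk_nvme_connect() failed\n");
            return 1;
        }

        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("SN: %.20s MN: %.40s max xfer: %u bytes\n",
               (const char *)cdata->sn, (const char *)cdata->mn,
               spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));

        spdk_nvme_detach(ctrlr);
        return 0;
    }

The "MDTS max_xfer_size 131072" line in the trace is the same value this sketch would print: the transport advertises an effectively unlimited transfer size, so MDTS from IDENTIFY is the binding limit.
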
[2024-11-15 11:48:12.474852] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x242c690) 00:24:47.055 [2024-11-15 11:48:12.474860] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.055 [2024-11-15 11:48:12.474867] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.055 [2024-11-15 11:48:12.474871] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.055 [2024-11-15 11:48:12.474875] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x242c690) 00:24:47.055 [2024-11-15 11:48:12.474881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.055 [2024-11-15 11:48:12.474898] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248e700, cid 4, qid 0 00:24:47.055 [2024-11-15 11:48:12.474903] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248e880, cid 5, qid 0 00:24:47.055 [2024-11-15 11:48:12.475072] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.055 [2024-11-15 11:48:12.475079] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.055 [2024-11-15 11:48:12.475083] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.055 [2024-11-15 11:48:12.475087] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248e700) on tqpair=0x242c690 00:24:47.055 [2024-11-15 11:48:12.475094] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.055 [2024-11-15 11:48:12.475100] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.055 [2024-11-15 11:48:12.475103] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.055 [2024-11-15 11:48:12.475107] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248e880) on tqpair=0x242c690 00:24:47.055 [2024-11-15 11:48:12.475117] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.055 [2024-11-15 11:48:12.475121] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x242c690) 00:24:47.055 [2024-11-15 11:48:12.475128] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.055 [2024-11-15 11:48:12.475139] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248e880, cid 5, qid 0 00:24:47.055 [2024-11-15 11:48:12.475354] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.055 [2024-11-15 11:48:12.475364] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.055 [2024-11-15 11:48:12.475368] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.055 [2024-11-15 11:48:12.475374] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248e880) on tqpair=0x242c690 00:24:47.055 [2024-11-15 11:48:12.475383] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.055 [2024-11-15 11:48:12.475387] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x242c690) 00:24:47.055 [2024-11-15 11:48:12.475393] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.055 [2024-11-15 11:48:12.475404] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248e880, cid 5, qid 0 00:24:47.055 [2024-11-15 11:48:12.475624] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.055 [2024-11-15 11:48:12.475632] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.055 [2024-11-15 11:48:12.475636] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.055 [2024-11-15 11:48:12.475639] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248e880) on tqpair=0x242c690 00:24:47.055 [2024-11-15 11:48:12.475649] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.055 [2024-11-15 11:48:12.475652] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x242c690) 00:24:47.055 [2024-11-15 11:48:12.475659] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.055 [2024-11-15 11:48:12.475670] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248e880, cid 5, qid 0 00:24:47.055 [2024-11-15 11:48:12.475925] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.055 [2024-11-15 11:48:12.475931] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.055 [2024-11-15 11:48:12.475934] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.055 [2024-11-15 11:48:12.475938] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248e880) on tqpair=0x242c690 00:24:47.055 [2024-11-15 11:48:12.475955] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.055 [2024-11-15 11:48:12.475959] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x242c690) 00:24:47.055 [2024-11-15 11:48:12.475966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.055 [2024-11-15 11:48:12.475973] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.055 [2024-11-15 11:48:12.475977] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x242c690) 00:24:47.055 [2024-11-15 11:48:12.475983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.055 [2024-11-15 11:48:12.475991] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.055 [2024-11-15 11:48:12.475994] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x242c690) 00:24:47.055 [2024-11-15 11:48:12.476000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.055 [2024-11-15 11:48:12.476008] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.055 [2024-11-15 11:48:12.476012] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x242c690) 00:24:47.056 [2024-11-15 11:48:12.476018] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.056 [2024-11-15 11:48:12.476030] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248e880, cid 5, qid 0 00:24:47.056 
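
The four GET LOG PAGE capsules queued back to back on cids 4-7 above are the "set supported log pages" step: error log (LID 01h, cdw10 07ff0001, the 8192-byte payload), SMART/health (LID 02h, 512 bytes), firmware slot (LID 03h, 512 bytes), and command effects (LID 05h, 4096 bytes), all in flight on the admin queue at once and reaped by polling. One such read through the public API looks roughly like the sketch below; it assumes the connected ctrlr from the previous sketch, and g_health, g_done, and health_done are illustrative names, not symbols from the test:

    #include <stdbool.h>  /* plus the headers from the previous sketch */

    static struct spdk_nvme_health_information_page g_health;
    static bool g_done;

    static void
    health_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        if (!spdk_nvme_cpl_is_error(cpl)) {
            /* Reported as 0 Kelvin by this virtual bdev controller,
             * matching the Health Information block printed below. */
            printf("temperature: %u K\n", g_health.temperature);
        }
        g_done = true;
    }

    static int
    read_health_page(struct spdk_nvme_ctrlr *ctrlr)
    {
        int rc;

        /* LID 02h, global nsid 0xffffffff, 512-byte payload: the same
         * shape as the "GET LOG PAGE ... cdw10:007f0002" capsule above. */
        rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr,
                SPDK_NVME_LOG_HEALTH_INFORMATION, SPDK_NVME_GLOBAL_NS_TAG,
                &g_health, sizeof(g_health), 0, health_done, NULL);
        if (rc != 0) {
            return rc;
        }
        while (!g_done) {
            /* Poll the admin qpair; this is what produces the pdu type 5
             * completion entries that follow in the trace. */
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
        }
        return 0;
    }
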
[2024-11-15 11:48:12.476035] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248e700, cid 4, qid 0 00:24:47.056 [2024-11-15 11:48:12.476040] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248ea00, cid 6, qid 0 00:24:47.056 [2024-11-15 11:48:12.476045] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248eb80, cid 7, qid 0 00:24:47.056 [2024-11-15 11:48:12.476356] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.056 [2024-11-15 11:48:12.476363] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.056 [2024-11-15 11:48:12.476366] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.056 [2024-11-15 11:48:12.476370] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x242c690): datao=0, datal=8192, cccid=5 00:24:47.056 [2024-11-15 11:48:12.476375] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x248e880) on tqpair(0x242c690): expected_datao=0, payload_size=8192 00:24:47.056 [2024-11-15 11:48:12.476379] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.056 [2024-11-15 11:48:12.476451] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.056 [2024-11-15 11:48:12.476455] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.056 [2024-11-15 11:48:12.476461] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.056 [2024-11-15 11:48:12.476466] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.056 [2024-11-15 11:48:12.476470] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.056 [2024-11-15 11:48:12.476473] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x242c690): datao=0, datal=512, cccid=4 00:24:47.056 [2024-11-15 11:48:12.476478] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x248e700) on tqpair(0x242c690): expected_datao=0, payload_size=512 00:24:47.056 [2024-11-15 11:48:12.476482] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.056 [2024-11-15 11:48:12.476489] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.056 [2024-11-15 11:48:12.476492] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.056 [2024-11-15 11:48:12.476498] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.056 [2024-11-15 11:48:12.476503] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.056 [2024-11-15 11:48:12.476507] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.056 [2024-11-15 11:48:12.476510] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x242c690): datao=0, datal=512, cccid=6 00:24:47.056 [2024-11-15 11:48:12.476515] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x248ea00) on tqpair(0x242c690): expected_datao=0, payload_size=512 00:24:47.056 [2024-11-15 11:48:12.476519] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.056 [2024-11-15 11:48:12.476525] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.056 [2024-11-15 11:48:12.476529] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.056 [2024-11-15 11:48:12.476534] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:47.056 [2024-11-15 11:48:12.476540] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:47.056 [2024-11-15 11:48:12.476543] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:47.056 [2024-11-15 11:48:12.476547] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x242c690): datao=0, datal=4096, cccid=7 00:24:47.056 [2024-11-15 11:48:12.476551] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x248eb80) on tqpair(0x242c690): expected_datao=0, payload_size=4096 00:24:47.056 [2024-11-15 11:48:12.476555] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.056 [2024-11-15 11:48:12.476570] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:47.056 [2024-11-15 11:48:12.476574] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:47.056 [2024-11-15 11:48:12.476589] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.056 [2024-11-15 11:48:12.476595] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.056 [2024-11-15 11:48:12.476598] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.056 [2024-11-15 11:48:12.476602] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248e880) on tqpair=0x242c690 00:24:47.056 [2024-11-15 11:48:12.476618] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.056 [2024-11-15 11:48:12.476625] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.056 [2024-11-15 11:48:12.476630] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.056 [2024-11-15 11:48:12.476634] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248e700) on tqpair=0x242c690 00:24:47.056 [2024-11-15 11:48:12.476645] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.056 [2024-11-15 11:48:12.476651] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.056 [2024-11-15 11:48:12.476654] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.056 [2024-11-15 11:48:12.476658] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248ea00) on tqpair=0x242c690 00:24:47.056 [2024-11-15 11:48:12.476665] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.056 [2024-11-15 11:48:12.476671] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.056 [2024-11-15 11:48:12.476674] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.056 [2024-11-15 11:48:12.476678] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248eb80) on tqpair=0x242c690 00:24:47.056 ===================================================== 00:24:47.056 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:47.056 ===================================================== 00:24:47.056 Controller Capabilities/Features 00:24:47.056 ================================ 00:24:47.056 Vendor ID: 8086 00:24:47.056 Subsystem Vendor ID: 8086 00:24:47.056 Serial Number: SPDK00000000000001 00:24:47.056 Model Number: SPDK bdev Controller 00:24:47.056 Firmware Version: 25.01 00:24:47.056 Recommended Arb Burst: 6 00:24:47.056 IEEE OUI Identifier: e4 d2 5c 00:24:47.056 Multi-path I/O 00:24:47.056 May have multiple subsystem ports: Yes 00:24:47.056 May have multiple controllers: Yes 00:24:47.056 Associated with SR-IOV VF: No 00:24:47.056 Max Data Transfer Size: 131072 00:24:47.056 Max Number of Namespaces: 32 00:24:47.056 Max Number of I/O Queues: 127 00:24:47.056 NVMe Specification Version (VS): 1.3 00:24:47.056 NVMe Specification Version (Identify): 1.3 
00:24:47.056 Maximum Queue Entries: 128 00:24:47.056 Contiguous Queues Required: Yes 00:24:47.056 Arbitration Mechanisms Supported 00:24:47.056 Weighted Round Robin: Not Supported 00:24:47.056 Vendor Specific: Not Supported 00:24:47.056 Reset Timeout: 15000 ms 00:24:47.056 Doorbell Stride: 4 bytes 00:24:47.056 NVM Subsystem Reset: Not Supported 00:24:47.056 Command Sets Supported 00:24:47.056 NVM Command Set: Supported 00:24:47.056 Boot Partition: Not Supported 00:24:47.056 Memory Page Size Minimum: 4096 bytes 00:24:47.056 Memory Page Size Maximum: 4096 bytes 00:24:47.056 Persistent Memory Region: Not Supported 00:24:47.056 Optional Asynchronous Events Supported 00:24:47.056 Namespace Attribute Notices: Supported 00:24:47.056 Firmware Activation Notices: Not Supported 00:24:47.056 ANA Change Notices: Not Supported 00:24:47.056 PLE Aggregate Log Change Notices: Not Supported 00:24:47.056 LBA Status Info Alert Notices: Not Supported 00:24:47.056 EGE Aggregate Log Change Notices: Not Supported 00:24:47.056 Normal NVM Subsystem Shutdown event: Not Supported 00:24:47.056 Zone Descriptor Change Notices: Not Supported 00:24:47.056 Discovery Log Change Notices: Not Supported 00:24:47.056 Controller Attributes 00:24:47.056 128-bit Host Identifier: Supported 00:24:47.056 Non-Operational Permissive Mode: Not Supported 00:24:47.056 NVM Sets: Not Supported 00:24:47.056 Read Recovery Levels: Not Supported 00:24:47.056 Endurance Groups: Not Supported 00:24:47.056 Predictable Latency Mode: Not Supported 00:24:47.056 Traffic Based Keep Alive: Not Supported 00:24:47.056 Namespace Granularity: Not Supported 00:24:47.056 SQ Associations: Not Supported 00:24:47.056 UUID List: Not Supported 00:24:47.056 Multi-Domain Subsystem: Not Supported 00:24:47.056 Fixed Capacity Management: Not Supported 00:24:47.056 Variable Capacity Management: Not Supported 00:24:47.056 Delete Endurance Group: Not Supported 00:24:47.056 Delete NVM Set: Not Supported 00:24:47.056 Extended LBA Formats Supported: Not Supported 00:24:47.056 Flexible Data Placement Supported: Not Supported 00:24:47.056 00:24:47.056 Controller Memory Buffer Support 00:24:47.056 ================================ 00:24:47.056 Supported: No 00:24:47.056 00:24:47.056 Persistent Memory Region Support 00:24:47.056 ================================ 00:24:47.056 Supported: No 00:24:47.056 00:24:47.056 Admin Command Set Attributes 00:24:47.056 ============================ 00:24:47.056 Security Send/Receive: Not Supported 00:24:47.056 Format NVM: Not Supported 00:24:47.056 Firmware Activate/Download: Not Supported 00:24:47.056 Namespace Management: Not Supported 00:24:47.056 Device Self-Test: Not Supported 00:24:47.056 Directives: Not Supported 00:24:47.056 NVMe-MI: Not Supported 00:24:47.056 Virtualization Management: Not Supported 00:24:47.056 Doorbell Buffer Config: Not Supported 00:24:47.056 Get LBA Status Capability: Not Supported 00:24:47.056 Command & Feature Lockdown Capability: Not Supported 00:24:47.056 Abort Command Limit: 4 00:24:47.056 Async Event Request Limit: 4 00:24:47.056 Number of Firmware Slots: N/A 00:24:47.056 Firmware Slot 1 Read-Only: N/A 00:24:47.056 Firmware Activation Without Reset: N/A 00:24:47.056 Multiple Update Detection Support: N/A 00:24:47.056 Firmware Update Granularity: No Information Provided 00:24:47.056 Per-Namespace SMART Log: No 00:24:47.056 Asymmetric Namespace Access Log Page: Not Supported 00:24:47.056 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:47.056 Command Effects Log Page: Supported 00:24:47.056 Get Log Page Extended 
Data: Supported 00:24:47.056 Telemetry Log Pages: Not Supported 00:24:47.057 Persistent Event Log Pages: Not Supported 00:24:47.057 Supported Log Pages Log Page: May Support 00:24:47.057 Commands Supported & Effects Log Page: Not Supported 00:24:47.057 Feature Identifiers & Effects Log Page:May Support 00:24:47.057 NVMe-MI Commands & Effects Log Page: May Support 00:24:47.057 Data Area 4 for Telemetry Log: Not Supported 00:24:47.057 Error Log Page Entries Supported: 128 00:24:47.057 Keep Alive: Supported 00:24:47.057 Keep Alive Granularity: 10000 ms 00:24:47.057 00:24:47.057 NVM Command Set Attributes 00:24:47.057 ========================== 00:24:47.057 Submission Queue Entry Size 00:24:47.057 Max: 64 00:24:47.057 Min: 64 00:24:47.057 Completion Queue Entry Size 00:24:47.057 Max: 16 00:24:47.057 Min: 16 00:24:47.057 Number of Namespaces: 32 00:24:47.057 Compare Command: Supported 00:24:47.057 Write Uncorrectable Command: Not Supported 00:24:47.057 Dataset Management Command: Supported 00:24:47.057 Write Zeroes Command: Supported 00:24:47.057 Set Features Save Field: Not Supported 00:24:47.057 Reservations: Supported 00:24:47.057 Timestamp: Not Supported 00:24:47.057 Copy: Supported 00:24:47.057 Volatile Write Cache: Present 00:24:47.057 Atomic Write Unit (Normal): 1 00:24:47.057 Atomic Write Unit (PFail): 1 00:24:47.057 Atomic Compare & Write Unit: 1 00:24:47.057 Fused Compare & Write: Supported 00:24:47.057 Scatter-Gather List 00:24:47.057 SGL Command Set: Supported 00:24:47.057 SGL Keyed: Supported 00:24:47.057 SGL Bit Bucket Descriptor: Not Supported 00:24:47.057 SGL Metadata Pointer: Not Supported 00:24:47.057 Oversized SGL: Not Supported 00:24:47.057 SGL Metadata Address: Not Supported 00:24:47.057 SGL Offset: Supported 00:24:47.057 Transport SGL Data Block: Not Supported 00:24:47.057 Replay Protected Memory Block: Not Supported 00:24:47.057 00:24:47.057 Firmware Slot Information 00:24:47.057 ========================= 00:24:47.057 Active slot: 1 00:24:47.057 Slot 1 Firmware Revision: 25.01 00:24:47.057 00:24:47.057 00:24:47.057 Commands Supported and Effects 00:24:47.057 ============================== 00:24:47.057 Admin Commands 00:24:47.057 -------------- 00:24:47.057 Get Log Page (02h): Supported 00:24:47.057 Identify (06h): Supported 00:24:47.057 Abort (08h): Supported 00:24:47.057 Set Features (09h): Supported 00:24:47.057 Get Features (0Ah): Supported 00:24:47.057 Asynchronous Event Request (0Ch): Supported 00:24:47.057 Keep Alive (18h): Supported 00:24:47.057 I/O Commands 00:24:47.057 ------------ 00:24:47.057 Flush (00h): Supported LBA-Change 00:24:47.057 Write (01h): Supported LBA-Change 00:24:47.057 Read (02h): Supported 00:24:47.057 Compare (05h): Supported 00:24:47.057 Write Zeroes (08h): Supported LBA-Change 00:24:47.057 Dataset Management (09h): Supported LBA-Change 00:24:47.057 Copy (19h): Supported LBA-Change 00:24:47.057 00:24:47.057 Error Log 00:24:47.057 ========= 00:24:47.057 00:24:47.057 Arbitration 00:24:47.057 =========== 00:24:47.057 Arbitration Burst: 1 00:24:47.057 00:24:47.057 Power Management 00:24:47.057 ================ 00:24:47.057 Number of Power States: 1 00:24:47.057 Current Power State: Power State #0 00:24:47.057 Power State #0: 00:24:47.057 Max Power: 0.00 W 00:24:47.057 Non-Operational State: Operational 00:24:47.057 Entry Latency: Not Reported 00:24:47.057 Exit Latency: Not Reported 00:24:47.057 Relative Read Throughput: 0 00:24:47.057 Relative Read Latency: 0 00:24:47.057 Relative Write Throughput: 0 00:24:47.057 Relative Write Latency: 0 
00:24:47.057 Idle Power: Not Reported 00:24:47.057 Active Power: Not Reported 00:24:47.057 Non-Operational Permissive Mode: Not Supported 00:24:47.057 00:24:47.057 Health Information 00:24:47.057 ================== 00:24:47.057 Critical Warnings: 00:24:47.057 Available Spare Space: OK 00:24:47.057 Temperature: OK 00:24:47.057 Device Reliability: OK 00:24:47.057 Read Only: No 00:24:47.057 Volatile Memory Backup: OK 00:24:47.057 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:47.057 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:47.057 Available Spare: 0% 00:24:47.057 Available Spare Threshold: 0% 00:24:47.057 Life Percentage Used:[2024-11-15 11:48:12.476787] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.057 [2024-11-15 11:48:12.476792] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x242c690) 00:24:47.057 [2024-11-15 11:48:12.476799] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.057 [2024-11-15 11:48:12.476811] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248eb80, cid 7, qid 0 00:24:47.057 [2024-11-15 11:48:12.477026] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.057 [2024-11-15 11:48:12.477033] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.057 [2024-11-15 11:48:12.477036] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.057 [2024-11-15 11:48:12.477040] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248eb80) on tqpair=0x242c690 00:24:47.057 [2024-11-15 11:48:12.477073] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:24:47.057 [2024-11-15 11:48:12.477083] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248e100) on tqpair=0x242c690 00:24:47.057 [2024-11-15 11:48:12.477089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.057 [2024-11-15 11:48:12.477094] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248e280) on tqpair=0x242c690 00:24:47.057 [2024-11-15 11:48:12.477099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.057 [2024-11-15 11:48:12.477104] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248e400) on tqpair=0x242c690 00:24:47.057 [2024-11-15 11:48:12.477109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.057 [2024-11-15 11:48:12.477114] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248e580) on tqpair=0x242c690 00:24:47.057 [2024-11-15 11:48:12.477119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.057 [2024-11-15 11:48:12.477127] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.057 [2024-11-15 11:48:12.477131] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.057 [2024-11-15 11:48:12.477134] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242c690) 00:24:47.057 [2024-11-15 11:48:12.477141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:47.057 [2024-11-15 11:48:12.477154] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248e580, cid 3, qid 0 00:24:47.057 [2024-11-15 11:48:12.477379] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.057 [2024-11-15 11:48:12.477385] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.057 [2024-11-15 11:48:12.477389] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.057 [2024-11-15 11:48:12.477395] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248e580) on tqpair=0x242c690 00:24:47.057 [2024-11-15 11:48:12.477402] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.057 [2024-11-15 11:48:12.477406] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.057 [2024-11-15 11:48:12.477409] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242c690) 00:24:47.057 [2024-11-15 11:48:12.477416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.057 [2024-11-15 11:48:12.477430] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248e580, cid 3, qid 0 00:24:47.057 [2024-11-15 11:48:12.481576] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.057 [2024-11-15 11:48:12.481584] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.057 [2024-11-15 11:48:12.481588] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.057 [2024-11-15 11:48:12.481591] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248e580) on tqpair=0x242c690 00:24:47.057 [2024-11-15 11:48:12.481597] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:24:47.057 [2024-11-15 11:48:12.481602] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:24:47.057 [2024-11-15 11:48:12.481612] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:47.058 [2024-11-15 11:48:12.481616] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:47.058 [2024-11-15 11:48:12.481619] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x242c690) 00:24:47.058 [2024-11-15 11:48:12.481626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.058 [2024-11-15 11:48:12.481639] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x248e580, cid 3, qid 0 00:24:47.058 [2024-11-15 11:48:12.481820] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:47.058 [2024-11-15 11:48:12.481826] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:47.058 [2024-11-15 11:48:12.481830] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:47.058 [2024-11-15 11:48:12.481834] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x248e580) on tqpair=0x242c690 00:24:47.058 [2024-11-15 11:48:12.481842] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 0 milliseconds 00:24:47.058 0% 00:24:47.058 Data Units Read: 0 00:24:47.058 Data Units Written: 0 00:24:47.058 Host Read Commands: 0 00:24:47.058 Host Write Commands: 0 00:24:47.058 Controller Busy Time: 0 minutes 00:24:47.058 Power Cycles: 0 
00:24:47.058 Power On Hours: 0 hours 00:24:47.058 Unsafe Shutdowns: 0 00:24:47.058 Unrecoverable Media Errors: 0 00:24:47.058 Lifetime Error Log Entries: 0 00:24:47.058 Warning Temperature Time: 0 minutes 00:24:47.058 Critical Temperature Time: 0 minutes 00:24:47.058 00:24:47.058 Number of Queues 00:24:47.058 ================ 00:24:47.058 Number of I/O Submission Queues: 127 00:24:47.058 Number of I/O Completion Queues: 127 00:24:47.058 00:24:47.058 Active Namespaces 00:24:47.058 ================= 00:24:47.058 Namespace ID:1 00:24:47.058 Error Recovery Timeout: Unlimited 00:24:47.058 Command Set Identifier: NVM (00h) 00:24:47.058 Deallocate: Supported 00:24:47.058 Deallocated/Unwritten Error: Not Supported 00:24:47.058 Deallocated Read Value: Unknown 00:24:47.058 Deallocate in Write Zeroes: Not Supported 00:24:47.058 Deallocated Guard Field: 0xFFFF 00:24:47.058 Flush: Supported 00:24:47.058 Reservation: Supported 00:24:47.058 Namespace Sharing Capabilities: Multiple Controllers 00:24:47.058 Size (in LBAs): 131072 (0GiB) 00:24:47.058 Capacity (in LBAs): 131072 (0GiB) 00:24:47.058 Utilization (in LBAs): 131072 (0GiB) 00:24:47.058 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:47.058 EUI64: ABCDEF0123456789 00:24:47.058 UUID: 32d11c36-c83b-485e-b4cb-17c6987e5afe 00:24:47.058 Thin Provisioning: Not Supported 00:24:47.058 Per-NS Atomic Units: Yes 00:24:47.058 Atomic Boundary Size (Normal): 0 00:24:47.058 Atomic Boundary Size (PFail): 0 00:24:47.058 Atomic Boundary Offset: 0 00:24:47.058 Maximum Single Source Range Length: 65535 00:24:47.058 Maximum Copy Length: 65535 00:24:47.058 Maximum Source Range Count: 1 00:24:47.058 NGUID/EUI64 Never Reused: No 00:24:47.058 Namespace Write Protected: No 00:24:47.058 Number of LBA Formats: 1 00:24:47.058 Current LBA Format: LBA Format #00 00:24:47.058 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:47.058 00:24:47.058 11:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:47.058 11:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:47.058 11:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.058 11:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:47.058 11:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.058 11:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:47.058 11:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:47.058 11:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:47.058 11:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:24:47.058 11:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:47.058 11:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:24:47.058 11:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:47.058 11:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:47.058 rmmod nvme_tcp 00:24:47.058 rmmod nvme_fabrics 00:24:47.318 rmmod nvme_keyring 00:24:47.318 11:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:47.318 11:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:24:47.318 11:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@129 -- # return 0 00:24:47.318 11:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1168777 ']' 00:24:47.318 11:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1168777 00:24:47.318 11:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 1168777 ']' 00:24:47.318 11:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 1168777 00:24:47.318 11:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:24:47.318 11:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:47.318 11:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1168777 00:24:47.318 11:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:47.318 11:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:47.318 11:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1168777' 00:24:47.318 killing process with pid 1168777 00:24:47.318 11:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 1168777 00:24:47.318 11:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 1168777 00:24:47.593 11:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:47.593 11:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:47.593 11:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:47.593 11:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:24:47.593 11:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:24:47.593 11:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:47.593 11:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:24:47.593 11:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:47.593 11:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:47.593 11:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.593 11:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:47.593 11:48:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.501 11:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:49.501 00:24:49.501 real 0m11.858s 00:24:49.501 user 0m9.098s 00:24:49.501 sys 0m6.271s 00:24:49.501 11:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:49.502 11:48:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:49.502 ************************************ 00:24:49.502 END TEST nvmf_identify 00:24:49.502 ************************************ 00:24:49.502 11:48:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:49.502 11:48:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:49.502 11:48:14 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:49.502 11:48:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.761 ************************************ 00:24:49.761 START TEST nvmf_perf 00:24:49.761 ************************************ 00:24:49.761 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:49.761 * Looking for test storage... 00:24:49.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:49.761 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:49.761 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:24:49.761 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:49.761 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:49.761 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:49.761 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:49.761 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:49.761 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:24:49.761 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:24:49.761 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:24:49.761 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:24:49.761 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:24:49.761 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:24:49.761 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:24:49.761 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:49.761 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:49.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.762 --rc genhtml_branch_coverage=1 00:24:49.762 --rc genhtml_function_coverage=1 00:24:49.762 --rc genhtml_legend=1 00:24:49.762 --rc geninfo_all_blocks=1 00:24:49.762 --rc geninfo_unexecuted_blocks=1 00:24:49.762 00:24:49.762 ' 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:49.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.762 --rc genhtml_branch_coverage=1 00:24:49.762 --rc genhtml_function_coverage=1 00:24:49.762 --rc genhtml_legend=1 00:24:49.762 --rc geninfo_all_blocks=1 00:24:49.762 --rc geninfo_unexecuted_blocks=1 00:24:49.762 00:24:49.762 ' 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:49.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.762 --rc genhtml_branch_coverage=1 00:24:49.762 --rc genhtml_function_coverage=1 00:24:49.762 --rc genhtml_legend=1 00:24:49.762 --rc geninfo_all_blocks=1 00:24:49.762 --rc geninfo_unexecuted_blocks=1 00:24:49.762 00:24:49.762 ' 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:49.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:49.762 --rc genhtml_branch_coverage=1 00:24:49.762 --rc genhtml_function_coverage=1 00:24:49.762 --rc genhtml_legend=1 00:24:49.762 --rc geninfo_all_blocks=1 00:24:49.762 --rc geninfo_unexecuted_blocks=1 00:24:49.762 00:24:49.762 ' 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:49.762 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:49.762 11:48:15 
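Note the failure recorded just above: nvmf/common.sh line 33 evaluates '[' '' -eq 1 ']' and bash reports "integer expression expected", because an unset option expands to an empty string before the numeric test. A hedged sketch of the defaulting pattern that avoids this (SOME_FLAG is a placeholder, not the variable actually tested at line 33):

# Sketch: default empty/unset flags before numeric tests to avoid
# "[: : integer expression expected". SOME_FLAG is hypothetical.
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo 'flag set'
fi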
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:49.762 11:48:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:57.894 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:57.894 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:57.894 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:57.894 11:48:22 
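The discovery loop above resolves each matched PCI function to its kernel interfaces with a sysfs glob. A condensed sketch of that lookup, using the address and patterns from the trace:

# Sketch: resolve a PCI function to its net devices, as the
# pci_net_devs glob in the trace does.
pci=0000:4b:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"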
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:57.894 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:57.894 11:48:22 
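nvmf_tcp_init above splits the two E810 ports across network namespaces so that target (10.0.0.2) and initiator (10.0.0.1) traffic traverses a real link. The same commands, condensed into a sketch (interface and namespace names taken from the log):

# Sketch of the namespace topology built in the trace.
ip netns add cvl_0_0_ns_spdk                       # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up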
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:57.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:57.894 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.547 ms 00:24:57.894 00:24:57.894 --- 10.0.0.2 ping statistics --- 00:24:57.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.894 rtt min/avg/max/mdev = 0.547/0.547/0.547/0.000 ms 00:24:57.894 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:57.894 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:57.894 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:24:57.894 00:24:57.894 --- 10.0.0.1 ping statistics --- 00:24:57.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:57.894 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:24:57.895 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:57.895 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:24:57.895 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:57.895 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:57.895 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:57.895 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:57.895 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:57.895 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:57.895 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:57.895 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:57.895 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:57.895 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:57.895 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:57.895 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1173282 00:24:57.895 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1173282 00:24:57.895 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:57.895 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 1173282 ']' 00:24:57.895 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:57.895 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:57.895 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:24:57.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:57.895 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:57.895 11:48:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:57.895 [2024-11-15 11:48:22.774869] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:24:57.895 [2024-11-15 11:48:22.774940] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:57.895 [2024-11-15 11:48:22.877305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:57.895 [2024-11-15 11:48:22.930773] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:57.895 [2024-11-15 11:48:22.930825] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:57.895 [2024-11-15 11:48:22.930834] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:57.895 [2024-11-15 11:48:22.930841] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:57.895 [2024-11-15 11:48:22.930847] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:57.895 [2024-11-15 11:48:22.932976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:57.895 [2024-11-15 11:48:22.933135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:57.895 [2024-11-15 11:48:22.933298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:57.895 [2024-11-15 11:48:22.933299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:58.155 11:48:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:58.155 11:48:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:24:58.155 11:48:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:58.155 11:48:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:58.155 11:48:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:58.155 11:48:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:58.155 11:48:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:58.155 11:48:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:58.724 11:48:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:58.724 11:48:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:58.985 11:48:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:58.985 11:48:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:59.247 11:48:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
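With nvmf_tgt listening on /var/tmp/spdk.sock, host/perf.sh assembles its bdev list over RPC and, as the next stretch of the trace shows, wires those bdevs into a TCP subsystem. A condensed sketch of that rpc.py sequence (all commands and flags as traced):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# 64 MiB malloc bdev with 512 B blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE)
$rpc bdev_malloc_create 64 512

# Recover the local NVMe traddr the way the trace does
$rpc framework_get_config bdev \
    | jq -r '.[].params | select(.name=="Nvme0").traddr'

# Transport, subsystem, namespaces and listener
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420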
00:24:59.247 11:48:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:24:59.247 11:48:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:59.247 11:48:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:59.247 11:48:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:59.508 [2024-11-15 11:48:24.754277] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:59.508 11:48:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:59.508 11:48:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:59.508 11:48:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:59.769 11:48:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:59.769 11:48:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:00.032 11:48:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:00.293 [2024-11-15 11:48:25.537968] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:00.293 11:48:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:00.293 11:48:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:25:00.293 11:48:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:25:00.293 11:48:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:00.293 11:48:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:25:01.677 Initializing NVMe Controllers 00:25:01.677 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:25:01.677 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:25:01.677 Initialization complete. Launching workers. 
00:25:01.677 ========================================================
00:25:01.677 Latency(us)
00:25:01.677 Device Information : IOPS MiB/s Average min max
00:25:01.677 PCIE (0000:65:00.0) NSID 1 from core 0: 77718.12 303.59 410.88 13.32 4969.90
00:25:01.677 ========================================================
00:25:01.677 Total : 77718.12 303.59 410.88 13.32 4969.90
00:25:01.677
00:25:01.677 11:48:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:25:03.059 Initializing NVMe Controllers
00:25:03.059 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:03.059 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:03.059 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:25:03.059 Initialization complete. Launching workers.
00:25:03.059 ========================================================
00:25:03.059 Latency(us)
00:25:03.059 Device Information : IOPS MiB/s Average min max
00:25:03.059 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 110.82 0.43 9339.39 118.22 45746.08
00:25:03.059 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 54.91 0.21 18211.31 7945.64 51878.36
00:25:03.059 ========================================================
00:25:03.059 Total : 165.73 0.65 12278.88 118.22 51878.36
00:25:03.059
00:25:03.059 11:48:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:25:04.440 Initializing NVMe Controllers
00:25:04.440 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:04.440 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:04.440 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:25:04.440 Initialization complete. Launching workers.
00:25:04.440 ========================================================
00:25:04.440 Latency(us)
00:25:04.440 Device Information : IOPS MiB/s Average min max
00:25:04.440 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11896.99 46.47 2696.05 418.87 6221.03
00:25:04.440 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3676.00 14.36 8744.30 7090.25 16160.33
00:25:04.440 ========================================================
00:25:04.440 Total : 15572.98 60.83 4123.73 418.87 16160.33
00:25:04.440
00:25:04.440 11:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:25:04.440 11:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:25:04.440 11:48:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:25:06.983 Initializing NVMe Controllers
00:25:06.983 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:06.983 Controller IO queue size 128, less than required.
00:25:06.983 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:06.983 Controller IO queue size 128, less than required.
00:25:06.983 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:06.983 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:06.983 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:25:06.983 Initialization complete. Launching workers.
00:25:06.983 ========================================================
00:25:06.983 Latency(us)
00:25:06.983 Device Information : IOPS MiB/s Average min max
00:25:06.983 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1924.63 481.16 67917.24 40721.54 111329.22
00:25:06.983 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 603.88 150.97 220544.26 48810.02 363610.06
00:25:06.983 ========================================================
00:25:06.983 Total : 2528.51 632.13 104369.09 40721.54 363610.06
00:25:06.983
00:25:06.983 11:48:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:25:06.983 No valid NVMe controllers or AIO or URING devices found
00:25:06.983 Initializing NVMe Controllers
00:25:06.983 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:06.983 Controller IO queue size 128, less than required.
00:25:06.983 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:06.983 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:25:06.983 Controller IO queue size 128, less than required.
00:25:06.983 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:06.983 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:25:06.983 WARNING: Some requested NVMe devices were skipped
00:25:06.983 11:48:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:25:09.521 Initializing NVMe Controllers
00:25:09.521 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:09.521 Controller IO queue size 128, less than required.
00:25:09.521 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:09.521 Controller IO queue size 128, less than required.
00:25:09.521 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:09.521 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:09.521 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:25:09.521 Initialization complete. Launching workers.
00:25:09.521
00:25:09.521 ====================
00:25:09.521 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:25:09.521 TCP transport:
00:25:09.521 polls: 38886
00:25:09.521 idle_polls: 23889
00:25:09.521 sock_completions: 14997
00:25:09.521 nvme_completions: 7239
00:25:09.521 submitted_requests: 10960
00:25:09.521 queued_requests: 1
00:25:09.521
00:25:09.521 ====================
00:25:09.521 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:25:09.521 TCP transport:
00:25:09.521 polls: 39032
00:25:09.521 idle_polls: 23833
00:25:09.521 sock_completions: 15199
00:25:09.521 nvme_completions: 6897
00:25:09.521 submitted_requests: 10304
00:25:09.521 queued_requests: 1
00:25:09.521 ========================================================
00:25:09.521 Latency(us)
00:25:09.521 Device Information : IOPS MiB/s Average min max
00:25:09.521 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1809.29 452.32 72428.20 39348.99 125401.43
00:25:09.522 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1723.80 430.95 75883.90 33937.83 132889.83
00:25:09.522 ========================================================
00:25:09.522 Total : 3533.09 883.27 74114.25 33937.83 132889.83
00:25:09.522
00:25:09.522 11:48:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:25:09.522 11:48:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:09.522 11:48:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:25:09.522 11:48:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:25:09.522 11:48:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:25:09.522 11:48:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:09.522 11:48:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync
00:25:09.522 11:48:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:09.522 11:48:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e
00:25:09.522 11:48:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:09.522 11:48:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:09.522 rmmod nvme_tcp
00:25:09.522 rmmod nvme_fabrics
00:25:09.522 rmmod nvme_keyring
00:25:09.522 11:48:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:09.522 11:48:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e
00:25:09.522 11:48:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0
00:25:09.522 11:48:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1173282 ']'
00:25:09.522 11:48:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1173282
00:25:09.522 11:48:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 1173282 ']'
00:25:09.522 11:48:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 1173282
00:25:09.522 11:48:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname
00:25:09.522 11:48:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:25:09.522 11:48:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1173282
00:25:09.780 11:48:35
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:09.780 11:48:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:09.780 11:48:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1173282' 00:25:09.780 killing process with pid 1173282 00:25:09.780 11:48:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 1173282 00:25:09.780 11:48:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 1173282 00:25:11.684 11:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:11.685 11:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:11.685 11:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:11.685 11:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:25:11.685 11:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:25:11.685 11:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:11.685 11:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:25:11.685 11:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:11.685 11:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:11.685 11:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.685 11:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:11.685 11:48:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:14.226 00:25:14.226 real 0m24.088s 00:25:14.226 user 0m57.850s 00:25:14.226 sys 0m8.509s 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:14.226 ************************************ 00:25:14.226 END TEST nvmf_perf 00:25:14.226 ************************************ 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.226 ************************************ 00:25:14.226 START TEST nvmf_fio_host 00:25:14.226 ************************************ 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:14.226 * Looking for test storage... 
00:25:14.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:14.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.226 --rc genhtml_branch_coverage=1 00:25:14.226 --rc genhtml_function_coverage=1 00:25:14.226 --rc genhtml_legend=1 00:25:14.226 --rc geninfo_all_blocks=1 00:25:14.226 --rc geninfo_unexecuted_blocks=1 00:25:14.226 00:25:14.226 ' 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:14.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.226 --rc genhtml_branch_coverage=1 00:25:14.226 --rc genhtml_function_coverage=1 00:25:14.226 --rc genhtml_legend=1 00:25:14.226 --rc geninfo_all_blocks=1 00:25:14.226 --rc geninfo_unexecuted_blocks=1 00:25:14.226 00:25:14.226 ' 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:14.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.226 --rc genhtml_branch_coverage=1 00:25:14.226 --rc genhtml_function_coverage=1 00:25:14.226 --rc genhtml_legend=1 00:25:14.226 --rc geninfo_all_blocks=1 00:25:14.226 --rc geninfo_unexecuted_blocks=1 00:25:14.226 00:25:14.226 ' 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:14.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:14.226 --rc genhtml_branch_coverage=1 00:25:14.226 --rc genhtml_function_coverage=1 00:25:14.226 --rc genhtml_legend=1 00:25:14.226 --rc geninfo_all_blocks=1 00:25:14.226 --rc geninfo_unexecuted_blocks=1 00:25:14.226 00:25:14.226 ' 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:14.226 11:48:39 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:14.226 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:14.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:14.227 
11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:14.227 11:48:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:22.362 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:22.362 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:22.362 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:22.363 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:22.363 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:22.363 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:22.363 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:25:22.363 00:25:22.363 --- 10.0.0.2 ping statistics --- 00:25:22.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:22.363 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:22.363 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:22.363 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:25:22.363 00:25:22.363 --- 10.0.0.1 ping statistics --- 00:25:22.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:22.363 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1180298 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1180298 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # '[' -z 1180298 ']' 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:22.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:22.363 11:48:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.363 [2024-11-15 11:48:46.935347] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:25:22.363 [2024-11-15 11:48:46.935417] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:22.363 [2024-11-15 11:48:47.037032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:22.363 [2024-11-15 11:48:47.090321] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:22.363 [2024-11-15 11:48:47.090372] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:22.363 [2024-11-15 11:48:47.090381] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:22.363 [2024-11-15 11:48:47.090388] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:22.363 [2024-11-15 11:48:47.090395] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:22.363 [2024-11-15 11:48:47.092633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:22.363 [2024-11-15 11:48:47.092744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:22.363 [2024-11-15 11:48:47.092908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:22.363 [2024-11-15 11:48:47.092910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:22.363 11:48:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:22.363 11:48:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:25:22.363 11:48:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:22.624 [2024-11-15 11:48:47.931755] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:22.624 11:48:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:22.624 11:48:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:22.624 11:48:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:22.624 11:48:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:22.884 Malloc1 00:25:22.884 11:48:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:23.144 11:48:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:23.144 11:48:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:23.406 [2024-11-15 11:48:48.792328] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:23.406 11:48:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:23.666 11:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:23.666 11:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:23.666 11:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:23.666 11:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:25:23.666 11:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:23.666 11:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:25:23.666 11:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:23.666 11:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:25:23.666 11:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:25:23.666 11:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:25:23.666 11:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:23.666 11:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:25:23.666 11:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:23.666 11:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:25:23.666 11:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:25:23.666 11:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:25:23.666 11:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:23.666 11:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:25:23.666 11:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:23.666 11:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:25:23.666 11:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:25:23.666 11:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:23.666 11:48:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:23.927 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:23.927 fio-3.35 00:25:23.927 Starting 1 thread 00:25:26.467 00:25:26.467 test: (groupid=0, jobs=1): 
err= 0: pid=1181062: Fri Nov 15 11:48:51 2024 00:25:26.467 read: IOPS=9549, BW=37.3MiB/s (39.1MB/s)(74.8MiB/2006msec) 00:25:26.467 slat (usec): min=2, max=277, avg= 2.18, stdev= 2.86 00:25:26.467 clat (usec): min=4276, max=12762, avg=7408.31, stdev=520.53 00:25:26.467 lat (usec): min=4308, max=12764, avg=7410.49, stdev=520.42 00:25:26.467 clat percentiles (usec): 00:25:26.467 | 1.00th=[ 6128], 5.00th=[ 6587], 10.00th=[ 6783], 20.00th=[ 7046], 00:25:26.467 | 30.00th=[ 7177], 40.00th=[ 7308], 50.00th=[ 7439], 60.00th=[ 7504], 00:25:26.467 | 70.00th=[ 7701], 80.00th=[ 7832], 90.00th=[ 8029], 95.00th=[ 8225], 00:25:26.467 | 99.00th=[ 8586], 99.50th=[ 8717], 99.90th=[10159], 99.95th=[11469], 00:25:26.467 | 99.99th=[12649] 00:25:26.467 bw ( KiB/s): min=37080, max=38904, per=99.96%, avg=38184.00, stdev=778.29, samples=4 00:25:26.467 iops : min= 9270, max= 9726, avg=9546.00, stdev=194.57, samples=4 00:25:26.467 write: IOPS=9560, BW=37.3MiB/s (39.2MB/s)(74.9MiB/2006msec); 0 zone resets 00:25:26.467 slat (usec): min=2, max=279, avg= 2.24, stdev= 2.21 00:25:26.467 clat (usec): min=2885, max=11643, avg=5925.74, stdev=456.05 00:25:26.467 lat (usec): min=2903, max=11645, avg=5927.98, stdev=456.00 00:25:26.467 clat percentiles (usec): 00:25:26.467 | 1.00th=[ 4948], 5.00th=[ 5276], 10.00th=[ 5407], 20.00th=[ 5604], 00:25:26.467 | 30.00th=[ 5735], 40.00th=[ 5800], 50.00th=[ 5932], 60.00th=[ 5997], 00:25:26.467 | 70.00th=[ 6128], 80.00th=[ 6259], 90.00th=[ 6456], 95.00th=[ 6587], 00:25:26.467 | 99.00th=[ 6915], 99.50th=[ 7111], 99.90th=[10028], 99.95th=[11076], 00:25:26.467 | 99.99th=[11600] 00:25:26.467 bw ( KiB/s): min=37968, max=38608, per=99.98%, avg=38234.00, stdev=280.75, samples=4 00:25:26.467 iops : min= 9492, max= 9652, avg=9558.50, stdev=70.19, samples=4 00:25:26.467 lat (msec) : 4=0.05%, 10=99.85%, 20=0.11% 00:25:26.467 cpu : usr=74.86%, sys=24.64%, ctx=29, majf=0, minf=17 00:25:26.467 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:25:26.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:26.467 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:26.467 issued rwts: total=19157,19179,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:26.467 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:26.467 00:25:26.467 Run status group 0 (all jobs): 00:25:26.467 READ: bw=37.3MiB/s (39.1MB/s), 37.3MiB/s-37.3MiB/s (39.1MB/s-39.1MB/s), io=74.8MiB (78.5MB), run=2006-2006msec 00:25:26.468 WRITE: bw=37.3MiB/s (39.2MB/s), 37.3MiB/s-37.3MiB/s (39.2MB/s-39.2MB/s), io=74.9MiB (78.6MB), run=2006-2006msec 00:25:26.468 11:48:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:26.468 11:48:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:26.468 11:48:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:25:26.468 11:48:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:26.468 11:48:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 
00:25:26.468 11:48:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:26.468 11:48:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:25:26.468 11:48:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:25:26.468 11:48:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:25:26.468 11:48:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:26.468 11:48:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:25:26.468 11:48:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:26.468 11:48:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:25:26.468 11:48:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:25:26.468 11:48:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:25:26.468 11:48:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:26.468 11:48:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:25:26.468 11:48:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:26.468 11:48:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:25:26.468 11:48:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:25:26.468 11:48:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:26.468 11:48:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:27.034 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:27.034 fio-3.35 00:25:27.034 Starting 1 thread 00:25:29.574 00:25:29.574 test: (groupid=0, jobs=1): err= 0: pid=1181665: Fri Nov 15 11:48:54 2024 00:25:29.574 read: IOPS=9491, BW=148MiB/s (156MB/s)(297MiB/2003msec) 00:25:29.574 slat (usec): min=3, max=110, avg= 3.60, stdev= 1.61 00:25:29.574 clat (usec): min=1996, max=15017, avg=8091.58, stdev=1926.28 00:25:29.574 lat (usec): min=2000, max=15020, avg=8095.18, stdev=1926.43 00:25:29.574 clat percentiles (usec): 00:25:29.574 | 1.00th=[ 4178], 5.00th=[ 5145], 10.00th=[ 5735], 20.00th=[ 6390], 00:25:29.574 | 30.00th=[ 6980], 40.00th=[ 7439], 50.00th=[ 7898], 60.00th=[ 8455], 00:25:29.574 | 70.00th=[ 9110], 80.00th=[ 9765], 90.00th=[10814], 95.00th=[11338], 00:25:29.574 | 99.00th=[12649], 99.50th=[13042], 99.90th=[14091], 99.95th=[14615], 00:25:29.574 | 99.99th=[15008] 00:25:29.574 bw ( KiB/s): min=73408, max=78944, per=49.89%, avg=75760.00, stdev=2377.85, samples=4 00:25:29.574 iops : min= 4588, max= 4934, avg=4735.00, stdev=148.62, samples=4 00:25:29.574 write: IOPS=5391, BW=84.2MiB/s (88.3MB/s)(155MiB/1835msec); 0 zone resets 00:25:29.574 slat (usec): min=39, 
max=456, avg=40.93, stdev= 8.39 00:25:29.574 clat (usec): min=1931, max=15653, avg=9171.79, stdev=1395.46 00:25:29.574 lat (usec): min=1971, max=15790, avg=9212.72, stdev=1397.45 00:25:29.574 clat percentiles (usec): 00:25:29.574 | 1.00th=[ 6259], 5.00th=[ 7111], 10.00th=[ 7504], 20.00th=[ 8029], 00:25:29.574 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9503], 00:25:29.574 | 70.00th=[ 9896], 80.00th=[10290], 90.00th=[10945], 95.00th=[11338], 00:25:29.574 | 99.00th=[12649], 99.50th=[13566], 99.90th=[15270], 99.95th=[15401], 00:25:29.574 | 99.99th=[15664] 00:25:29.574 bw ( KiB/s): min=76832, max=81088, per=91.18%, avg=78656.00, stdev=1859.49, samples=4 00:25:29.574 iops : min= 4802, max= 5068, avg=4916.00, stdev=116.22, samples=4 00:25:29.574 lat (msec) : 2=0.01%, 4=0.64%, 10=79.11%, 20=20.25% 00:25:29.574 cpu : usr=83.47%, sys=15.33%, ctx=13, majf=0, minf=21 00:25:29.574 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:29.574 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:29.574 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:29.574 issued rwts: total=19011,9894,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:29.574 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:29.574 00:25:29.574 Run status group 0 (all jobs): 00:25:29.575 READ: bw=148MiB/s (156MB/s), 148MiB/s-148MiB/s (156MB/s-156MB/s), io=297MiB (311MB), run=2003-2003msec 00:25:29.575 WRITE: bw=84.2MiB/s (88.3MB/s), 84.2MiB/s-84.2MiB/s (88.3MB/s-88.3MB/s), io=155MiB (162MB), run=1835-1835msec 00:25:29.575 11:48:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:29.575 11:48:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:29.575 11:48:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:29.575 11:48:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:29.575 11:48:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:29.575 11:48:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:29.575 11:48:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:25:29.575 11:48:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:29.575 11:48:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:25:29.575 11:48:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:29.575 11:48:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:29.575 rmmod nvme_tcp 00:25:29.575 rmmod nvme_fabrics 00:25:29.575 rmmod nvme_keyring 00:25:29.575 11:48:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:29.575 11:48:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:25:29.575 11:48:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:25:29.575 11:48:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1180298 ']' 00:25:29.575 11:48:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1180298 00:25:29.575 11:48:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 1180298 ']' 00:25:29.575 11:48:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # 
kill -0 1180298 00:25:29.575 11:48:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:25:29.575 11:48:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:29.575 11:48:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1180298 00:25:29.575 11:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:29.575 11:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:29.575 11:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1180298' 00:25:29.575 killing process with pid 1180298 00:25:29.575 11:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 1180298 00:25:29.575 11:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 1180298 00:25:29.834 11:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:29.834 11:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:29.834 11:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:29.834 11:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:25:29.834 11:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:25:29.834 11:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:29.834 11:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:29.834 11:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:29.834 11:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:29.834 11:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:29.834 11:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:29.834 11:48:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.748 11:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:31.748 00:25:31.748 real 0m18.035s 00:25:31.748 user 1m8.758s 00:25:31.748 sys 0m7.746s 00:25:31.748 11:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:31.748 11:48:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.748 ************************************ 00:25:31.748 END TEST nvmf_fio_host 00:25:31.748 ************************************ 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.009 ************************************ 00:25:32.009 START TEST nvmf_failover 00:25:32.009 ************************************ 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:32.009 * Looking for test storage... 00:25:32.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:32.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.009 --rc genhtml_branch_coverage=1 00:25:32.009 --rc genhtml_function_coverage=1 00:25:32.009 --rc genhtml_legend=1 00:25:32.009 --rc geninfo_all_blocks=1 00:25:32.009 --rc geninfo_unexecuted_blocks=1 00:25:32.009 00:25:32.009 ' 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:32.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.009 --rc genhtml_branch_coverage=1 00:25:32.009 --rc genhtml_function_coverage=1 00:25:32.009 --rc genhtml_legend=1 00:25:32.009 --rc geninfo_all_blocks=1 00:25:32.009 --rc geninfo_unexecuted_blocks=1 00:25:32.009 00:25:32.009 ' 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:32.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.009 --rc genhtml_branch_coverage=1 00:25:32.009 --rc genhtml_function_coverage=1 00:25:32.009 --rc genhtml_legend=1 00:25:32.009 --rc geninfo_all_blocks=1 00:25:32.009 --rc geninfo_unexecuted_blocks=1 00:25:32.009 00:25:32.009 ' 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:32.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:32.009 --rc genhtml_branch_coverage=1 00:25:32.009 --rc genhtml_function_coverage=1 00:25:32.009 --rc genhtml_legend=1 00:25:32.009 --rc geninfo_all_blocks=1 00:25:32.009 --rc geninfo_unexecuted_blocks=1 00:25:32.009 00:25:32.009 ' 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:32.009 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:32.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:32.270 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:32.271 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:32.271 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.271 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:32.271 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:32.271 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:25:32.271 11:48:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:40.410 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:40.410 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:40.410 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:40.410 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:40.411 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:40.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:40.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.517 ms 00:25:40.411 00:25:40.411 --- 10.0.0.2 ping statistics --- 00:25:40.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.411 rtt min/avg/max/mdev = 0.517/0.517/0.517/0.000 ms 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:40.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:40.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:25:40.411 00:25:40.411 --- 10.0.0.1 ping statistics --- 00:25:40.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:40.411 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:40.411 11:49:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:40.411 11:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:40.411 11:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:40.411 11:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:40.411 11:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:40.411 11:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1186326 00:25:40.411 11:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 1186326 00:25:40.411 11:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:40.411 11:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 1186326 ']' 00:25:40.411 11:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:40.411 11:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:40.411 11:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:40.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:40.411 11:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:40.411 11:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:40.411 [2024-11-15 11:49:05.095935] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:25:40.411 [2024-11-15 11:49:05.096000] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:40.411 [2024-11-15 11:49:05.198880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:40.411 [2024-11-15 11:49:05.250468] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:40.411 [2024-11-15 11:49:05.250517] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:40.412 [2024-11-15 11:49:05.250528] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:40.412 [2024-11-15 11:49:05.250538] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:40.412 [2024-11-15 11:49:05.250546] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:40.412 [2024-11-15 11:49:05.252439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:25:40.412 [2024-11-15 11:49:05.252616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:25:40.412 [2024-11-15 11:49:05.252616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:25:40.672 11:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:25:40.672 11:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0
00:25:40.672 11:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:25:40.672 11:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable
00:25:40.672 11:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:40.672 11:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:40.672 11:49:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:25:40.672 [2024-11-15 11:49:06.119483] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:40.672 11:49:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:25:40.933 Malloc0
00:25:40.933 11:49:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:41.194 11:49:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:41.455 11:49:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:41.455 [2024-11-15 11:49:06.930286] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:41.718 11:49:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:41.718 [2024-11-15 11:49:07.122901] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:25:41.718 11:49:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:41.980 [2024-11-15 11:49:07.315658] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:25:41.980 11:49:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:25:41.980 11:49:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1186857
00:25:41.980 11:49:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:25:41.980 11:49:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1186857 /var/tmp/bdevperf.sock
00:25:41.980 11:49:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 1186857 ']'
00:25:41.980 11:49:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:41.980 11:49:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100
00:25:41.980 11:49:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:41.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:25:41.980 11:49:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable
00:25:41.980 11:49:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:42.921 11:49:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:25:42.921 11:49:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0
00:25:42.921 11:49:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:43.181 NVMe0n1
00:25:43.181 11:49:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:43.441
00:25:43.702 11:49:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:43.702 11:49:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1187116
00:25:43.702 11:49:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:25:44.642 11:49:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:44.642 [2024-11-15 11:49:10.109183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242ded0 is same with the state(6) to be set
00:25:44.642 [... previous tcp.c:1773 recv-state error repeated for tqpair=0x242ded0 while the 4420 qpair is torn down ...]
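(Condensed, the host-side sequence traced above is a two-path failover setup; every command below is taken verbatim from the trace, with the spdk repo root assumed as the working directory:

  # start bdevperf idle (-z) so it can be driven over its own RPC socket
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  # register two paths to the same subsystem under one controller name;
  # '-x failover' keeps the second path as a standby alternate rather than
  # an active-active multipath leg
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  # kick off the I/O run, then pull the active listener to force a failover
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The recv-state errors that follow each removal are the target-side trace of the dropped qpair being torn down while the initiator fails over to the remaining path.)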
00:25:44.901 11:49:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:25:48.192 11:49:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:48.192
00:25:48.192 11:49:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:48.453 [2024-11-15 11:49:13.729196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242ecf0 is same with the state(6) to be set
00:25:48.453 [... previous tcp.c:1773 recv-state error repeated for tqpair=0x242ecf0 ...]
00:25:48.453 11:49:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:25:51.752 11:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:51.752 [2024-11-15 11:49:16.918876] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:51.752 11:49:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:25:52.692 11:49:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:52.692 [2024-11-15 11:49:18.110077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242fbf0 is same with the state(6) to be set
00:25:52.692 [... previous tcp.c:1773 recv-state error repeated for tqpair=0x242fbf0 ...]
00:25:52.692 11:49:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1187116
00:25:59.280 {
00:25:59.280   "results": [
00:25:59.280     {
00:25:59.280       "job": "NVMe0n1",
00:25:59.280       "core_mask": "0x1",
00:25:59.280       "workload": "verify",
00:25:59.280       "status": "finished",
00:25:59.280       "verify_range": {
00:25:59.280         "start": 0,
00:25:59.280         "length": 16384
00:25:59.280       },
00:25:59.280       "queue_depth": 128,
00:25:59.280       "io_size": 4096,
00:25:59.280       "runtime": 15.002621,
00:25:59.280       "iops": 12436.560251705352,
00:25:59.280       "mibps": 48.58031348322403,
00:25:59.280       "io_failed": 8509,
00:25:59.280       "io_timeout": 0,
00:25:59.280       "avg_latency_us": 9822.075584396945,
00:25:59.280       "min_latency_us": 392.53333333333336,
00:25:59.280       "max_latency_us": 13544.106666666667
00:25:59.280     }
00:25:59.280   ],
00:25:59.280   "core_count": 1
00:25:59.280 }
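(Reading the summary: roughly 12.4k IOPS at queue depth 128 over the 15 s run, with 8509 failed I/Os, consistent with the requests that were in flight each time a listener was pulled; the matching ABORTED - SQ DELETION completions show up in the bdevperf log below. If the JSON block is saved to a file, say results.json, the key fields can be pulled out with jq:

  jq '.results[0] | {iops, io_failed, avg_latency_us}' results.json
)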
00:25:59.280 11:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1186857
00:25:59.280 11:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 1186857 ']'
00:25:59.280 11:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 1186857
00:25:59.280 11:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname
00:25:59.280 11:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:25:59.280 11:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1186857
00:25:59.280 11:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:25:59.280 11:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:25:59.280 11:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1186857'
00:25:59.280 killing process with pid 1186857
00:25:59.280 11:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 1186857
00:25:59.280 11:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 1186857
00:25:59.280 11:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:59.280 [2024-11-15 11:49:07.396858] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
00:25:59.280 [2024-11-15 11:49:07.396939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1186857 ]
00:25:59.280 [2024-11-15 11:49:07.489701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:59.280 [2024-11-15 11:49:07.542860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:59.280 Running I/O for 15 seconds...
00:25:59.280 11324.00 IOPS, 44.23 MiB/s [2024-11-15T10:49:24.778Z]
00:25:59.280 [2024-11-15 11:49:10.109757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:59.280 [2024-11-15 11:49:10.109790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:59.280 [... nvme_qpair.c print_command/print_completion pairs repeated: each remaining queued WRITE and READ on qid:1 is completed as ABORTED - SQ DELETION while the listener is removed ...]
00:25:59.283 [2024-11-15 11:49:10.111754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:59.283 [2024-11-15 11:49:10.111762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.283 [2024-11-15 11:49:10.111771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:98200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.283 [2024-11-15 11:49:10.111778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.283 [2024-11-15 11:49:10.111787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:98208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.283 [2024-11-15 11:49:10.111794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.283 [2024-11-15 11:49:10.111804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:98888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.283 [2024-11-15 11:49:10.111811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.283 [2024-11-15 11:49:10.111820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:98216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.283 [2024-11-15 11:49:10.111827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.283 [2024-11-15 11:49:10.111837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:98224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.283 [2024-11-15 11:49:10.111844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.283 [2024-11-15 11:49:10.111853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.283 [2024-11-15 11:49:10.111860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.283 [2024-11-15 11:49:10.111870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:98240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.283 [2024-11-15 11:49:10.111877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.283 [2024-11-15 11:49:10.111887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:98248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.283 [2024-11-15 11:49:10.111896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.283 [2024-11-15 11:49:10.111905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:98256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.283 [2024-11-15 11:49:10.111912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.283 [2024-11-15 11:49:10.111922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:98264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.283 [2024-11-15 11:49:10.111929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.283 
[2024-11-15 11:49:10.111949] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:59.283 [2024-11-15 11:49:10.111956] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:59.283 [2024-11-15 11:49:10.111963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98272 len:8 PRP1 0x0 PRP2 0x0 00:25:59.283 [2024-11-15 11:49:10.111970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.283 [2024-11-15 11:49:10.112013] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:59.283 [2024-11-15 11:49:10.112035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.283 [2024-11-15 11:49:10.112043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.283 [2024-11-15 11:49:10.112051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.283 [2024-11-15 11:49:10.112058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.283 [2024-11-15 11:49:10.112067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.283 [2024-11-15 11:49:10.112074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.283 [2024-11-15 11:49:10.112082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.283 [2024-11-15 11:49:10.112090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.283 [2024-11-15 11:49:10.112097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:25:59.283 [2024-11-15 11:49:10.115674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:25:59.283 [2024-11-15 11:49:10.115699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac8d70 (9): Bad file descriptor 00:25:59.283 [2024-11-15 11:49:10.139696] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
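The run above is the nvme_qpair layer enumerating every queued command it aborts with ABORTED - SQ DELETION (00/08) while bdev_nvme fails over nqn.2016-06.io.spdk:cnode1 from 10.0.0.2:4420 to 10.0.0.2:4421 and resets the controller; the len:8 in each entry is consistent with 512-byte sectors, i.e. 4 KiB I/Os, which matches the throughput counters that follow (11108.50 IOPS ≈ 43.39 MiB/s at 4 KiB). A minimal offline triage sketch for dumps like this one, assuming only the message formats visible in the log (nvme_io_qpair_print_command, bdev_nvme_failover_trid); it is illustrative and not part of SPDK or the autotest harness:

#!/usr/bin/env python3
"""Summarize SPDK nvme_qpair abort storms in an autotest console log.

Hypothetical helper; it assumes only the message formats visible in the
log above and nothing about SPDK internals.
"""
import re
import sys
from collections import Counter

# e.g. "*NOTICE*: READ sqid:1 cid:99 nsid:1 lba:97992 len:8 SGL ..."
CMD_RE = re.compile(
    r"\*NOTICE\*: (READ|WRITE) sqid:(\d+) cid:(\d+) nsid:\d+ lba:(\d+) len:(\d+)"
)
# e.g. "Start failover from 10.0.0.2:4420 to 10.0.0.2:4421"
FAILOVER_RE = re.compile(r"Start failover from (\S+) to (\S+)")

def summarize(stream):
    ops = Counter()
    bytes_in_flight = 0
    failovers = []
    for line in stream:
        # A physical line in this transcript can hold many log records,
        # so scan for every match rather than anchoring at line start.
        for m in CMD_RE.finditer(line):
            opc, _sqid, _cid, _lba, nlb = m.groups()
            ops[opc] += 1
            # len appears to count 512-byte sectors here; len:8 = 4 KiB,
            # matching the IOPS/MiB/s progress lines in this log.
            bytes_in_flight += int(nlb) * 512
        for m in FAILOVER_RE.finditer(line):
            failovers.append((m.group(1), m.group(2)))
    return ops, bytes_in_flight, failovers

if __name__ == "__main__":
    ops, nbytes, failovers = summarize(sys.stdin)
    print(f"aborted commands: {dict(ops)} ({nbytes / 1024:.0f} KiB in flight)")
    for src, dst in failovers:
        print(f"failover: {src} -> {dst}")

Fed this console log on stdin, it reports the aborted READ/WRITE counts, the in-flight bytes they represent, and each failover hop (here 4420 -> 4421, then 4421 -> 4422 later in the run).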
00:25:59.283 11108.50 IOPS, 43.39 MiB/s [2024-11-15T10:49:24.781Z] 11182.67 IOPS, 43.68 MiB/s [2024-11-15T10:49:24.781Z] 11585.25 IOPS, 45.25 MiB/s [2024-11-15T10:49:24.781Z] [2024-11-15 11:49:13.729945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:56744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.283 [2024-11-15 11:49:13.729975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.283 [2024-11-15 11:49:13.729986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:56752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.283 [2024-11-15 11:49:13.729992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.283 [2024-11-15 11:49:13.730003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:56760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.283 [2024-11-15 11:49:13.730008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.283 [2024-11-15 11:49:13.730014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:56768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.284 [2024-11-15 11:49:13.730019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.284 [2024-11-15 11:49:13.730026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:56776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.284 [2024-11-15 11:49:13.730031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.284 [2024-11-15 11:49:13.730037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:56784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.284 [2024-11-15 11:49:13.730042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.284 [2024-11-15 11:49:13.730049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:56792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.284 [2024-11-15 11:49:13.730053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.284 [2024-11-15 11:49:13.730060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:56800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.284 [2024-11-15 11:49:13.730065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.284 [2024-11-15 11:49:13.730071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:56808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.284 [2024-11-15 11:49:13.730076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.284 [2024-11-15 11:49:13.730083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:56816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.284 [2024-11-15 11:49:13.730088] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.284 [2024-11-15 11:49:13.730094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:56824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.284 [2024-11-15 11:49:13.730099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.284 [2024-11-15 11:49:13.730106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:56832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.284 [2024-11-15 11:49:13.730111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.284 [2024-11-15 11:49:13.730117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:56840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.284 [2024-11-15 11:49:13.730122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.284 [2024-11-15 11:49:13.730129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:56848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.284 [2024-11-15 11:49:13.730134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.284 [2024-11-15 11:49:13.730140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:56856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.284 [2024-11-15 11:49:13.730146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.284 [2024-11-15 11:49:13.730153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:56864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.284 [2024-11-15 11:49:13.730158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.284 [2024-11-15 11:49:13.730164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:56872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.284 [2024-11-15 11:49:13.730169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.284 [2024-11-15 11:49:13.730175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:56880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.284 [2024-11-15 11:49:13.730180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.284 [2024-11-15 11:49:13.730186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:56888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.284 [2024-11-15 11:49:13.730191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.284 [2024-11-15 11:49:13.730198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:56896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.284 [2024-11-15 11:49:13.730202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.284 [2024-11-15 11:49:13.730209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:56904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.284 [2024-11-15 11:49:13.730214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.284 [2024-11-15 11:49:13.730220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:56912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.284 [2024-11-15 11:49:13.730225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.284 [2024-11-15 11:49:13.730231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:56920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.284 [2024-11-15 11:49:13.730236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.284 [2024-11-15 11:49:13.730243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:56928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.284 [2024-11-15 11:49:13.730247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.284 [2024-11-15 11:49:13.730254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:56936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.284 [2024-11-15 11:49:13.730258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.284 [2024-11-15 11:49:13.730265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:56944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.284 [2024-11-15 11:49:13.730270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.284 [2024-11-15 11:49:13.730276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:56952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.284 [2024-11-15 11:49:13.730281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.284 [2024-11-15 11:49:13.730287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:56960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.284 [2024-11-15 11:49:13.730294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.284 [2024-11-15 11:49:13.730300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:56968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.284 [2024-11-15 11:49:13.730305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.284 [2024-11-15 11:49:13.730312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:56976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.284 [2024-11-15 11:49:13.730317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:59.284 [2024-11-15 11:49:13.730323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:56984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.284 [2024-11-15 11:49:13.730328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.284 [2024-11-15 11:49:13.730334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:56992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.284 [2024-11-15 11:49:13.730339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.284 [2024-11-15 11:49:13.730346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:57000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.284 [2024-11-15 11:49:13.730351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.284 [2024-11-15 11:49:13.730359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:57008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.284 [2024-11-15 11:49:13.730364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.284 [2024-11-15 11:49:13.730371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:57016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.284 [2024-11-15 11:49:13.730375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.284 [2024-11-15 11:49:13.730382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:57024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.284 [2024-11-15 11:49:13.730388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.284 [2024-11-15 11:49:13.730394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:57032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.284 [2024-11-15 11:49:13.730399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.285 [2024-11-15 11:49:13.730406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:57040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.285 [2024-11-15 11:49:13.730411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.285 [2024-11-15 11:49:13.730418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:57048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.285 [2024-11-15 11:49:13.730423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.285 [2024-11-15 11:49:13.730429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:57056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.285 [2024-11-15 11:49:13.730434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.285 [2024-11-15 11:49:13.730442] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:57064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.285 [2024-11-15 11:49:13.730447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.285 [2024-11-15 11:49:13.730453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:57072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.285 [2024-11-15 11:49:13.730458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.285 [2024-11-15 11:49:13.730464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:57080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.285 [2024-11-15 11:49:13.730469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.285 [2024-11-15 11:49:13.730475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:57088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.285 [2024-11-15 11:49:13.730480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.285 [2024-11-15 11:49:13.730487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:57096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.285 [2024-11-15 11:49:13.730491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.285 [2024-11-15 11:49:13.730498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:57104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.285 [2024-11-15 11:49:13.730503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.285 [2024-11-15 11:49:13.730509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:57112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.285 [2024-11-15 11:49:13.730514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.285 [2024-11-15 11:49:13.730521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:57120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.285 [2024-11-15 11:49:13.730526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.285 [2024-11-15 11:49:13.730532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:57128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.285 [2024-11-15 11:49:13.730537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.285 [2024-11-15 11:49:13.730543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:57136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.285 [2024-11-15 11:49:13.730548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.285 [2024-11-15 11:49:13.730554] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:57144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.285 [2024-11-15 11:49:13.730559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.285 [2024-11-15 11:49:13.730570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:57152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.285 [2024-11-15 11:49:13.730575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.285 [2024-11-15 11:49:13.730581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:57160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.285 [2024-11-15 11:49:13.730588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.285 [2024-11-15 11:49:13.730594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:57168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.285 [2024-11-15 11:49:13.730599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.285 [2024-11-15 11:49:13.730605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:57176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.285 [2024-11-15 11:49:13.730610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.285 [2024-11-15 11:49:13.730617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:57184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.285 [2024-11-15 11:49:13.730621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.285 [2024-11-15 11:49:13.730628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:57192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.285 [2024-11-15 11:49:13.730633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.285 [2024-11-15 11:49:13.730639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:57200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.285 [2024-11-15 11:49:13.730644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.285 [2024-11-15 11:49:13.730650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:57208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.285 [2024-11-15 11:49:13.730656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.285 [2024-11-15 11:49:13.730662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:57216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.285 [2024-11-15 11:49:13.730667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.285 [2024-11-15 11:49:13.730673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:57224 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.285 [2024-11-15 11:49:13.730678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.285 [2024-11-15 11:49:13.730684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:57232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.285 [2024-11-15 11:49:13.730689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.285 [2024-11-15 11:49:13.730695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:57240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.285 [2024-11-15 11:49:13.730700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.285 [2024-11-15 11:49:13.730707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:57248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.285 [2024-11-15 11:49:13.730712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.285 [2024-11-15 11:49:13.730718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:57256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.285 [2024-11-15 11:49:13.730723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.285 [2024-11-15 11:49:13.730731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:57264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.285 [2024-11-15 11:49:13.730736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.285 [2024-11-15 11:49:13.730742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:57272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.285 [2024-11-15 11:49:13.730747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.285 [2024-11-15 11:49:13.730754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:57280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.285 [2024-11-15 11:49:13.730762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.285 [2024-11-15 11:49:13.730768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:57288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.285 [2024-11-15 11:49:13.730773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.285 [2024-11-15 11:49:13.730780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:57296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.285 [2024-11-15 11:49:13.730785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.285 [2024-11-15 11:49:13.730791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:57304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.285 
[2024-11-15 11:49:13.730796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.285 [2024-11-15 11:49:13.730802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:57312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.285 [2024-11-15 11:49:13.730807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.285 [2024-11-15 11:49:13.730814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:57320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.285 [2024-11-15 11:49:13.730819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.285 [2024-11-15 11:49:13.730825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:57328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.285 [2024-11-15 11:49:13.730830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.285 [2024-11-15 11:49:13.730836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:57336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.285 [2024-11-15 11:49:13.730841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.285 [2024-11-15 11:49:13.730848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:57344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.285 [2024-11-15 11:49:13.730853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.286 [2024-11-15 11:49:13.730859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:57352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.286 [2024-11-15 11:49:13.730864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.286 [2024-11-15 11:49:13.730870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:57360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.286 [2024-11-15 11:49:13.730875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.286 [2024-11-15 11:49:13.730883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:57368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.286 [2024-11-15 11:49:13.730888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.286 [2024-11-15 11:49:13.730894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:57376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.286 [2024-11-15 11:49:13.730899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.286 [2024-11-15 11:49:13.730905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:56368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.286 [2024-11-15 11:49:13.730910] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.286 [2024-11-15 11:49:13.730916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:56376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.286 [2024-11-15 11:49:13.730922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.286 [2024-11-15 11:49:13.730928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:56384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.286 [2024-11-15 11:49:13.730933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.286 [2024-11-15 11:49:13.730939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:56392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.286 [2024-11-15 11:49:13.730944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.286 [2024-11-15 11:49:13.730951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:56400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.286 [2024-11-15 11:49:13.730955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.286 [2024-11-15 11:49:13.730962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:56408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.286 [2024-11-15 11:49:13.730967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.286 [2024-11-15 11:49:13.730973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:56416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.286 [2024-11-15 11:49:13.730978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.286 [2024-11-15 11:49:13.730984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:56424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.286 [2024-11-15 11:49:13.730989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.286 [2024-11-15 11:49:13.730995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:56432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.286 [2024-11-15 11:49:13.731000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.286 [2024-11-15 11:49:13.731006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:56440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.286 [2024-11-15 11:49:13.731011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.286 [2024-11-15 11:49:13.731017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.286 [2024-11-15 11:49:13.731024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.286 [2024-11-15 11:49:13.731030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:56456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.286 [2024-11-15 11:49:13.731035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.286 [2024-11-15 11:49:13.731041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:56464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.286 [2024-11-15 11:49:13.731046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.286 [2024-11-15 11:49:13.731052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:56472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.286 [2024-11-15 11:49:13.731057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.286 [2024-11-15 11:49:13.731063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:56480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.286 [2024-11-15 11:49:13.731068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.286 [2024-11-15 11:49:13.731075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:56488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.286 [2024-11-15 11:49:13.731079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.286 [2024-11-15 11:49:13.731086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:56496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.286 [2024-11-15 11:49:13.731092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.286 [2024-11-15 11:49:13.731098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:56504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.286 [2024-11-15 11:49:13.731103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.286 [2024-11-15 11:49:13.731109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:56512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.286 [2024-11-15 11:49:13.731114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.286 [2024-11-15 11:49:13.731120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:56520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.286 [2024-11-15 11:49:13.731125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.286 [2024-11-15 11:49:13.731132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:56528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.286 [2024-11-15 11:49:13.731136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.286 [2024-11-15 11:49:13.731143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:56536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.286 [2024-11-15 11:49:13.731147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.286 [2024-11-15 11:49:13.731154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:56544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.286 [2024-11-15 11:49:13.731158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.286 [2024-11-15 11:49:13.731166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:57384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:59.286 [2024-11-15 11:49:13.731171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.286 [2024-11-15 11:49:13.731177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:56552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.286 [2024-11-15 11:49:13.731182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.286 [2024-11-15 11:49:13.731188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:56560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.286 [2024-11-15 11:49:13.731193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.286 [2024-11-15 11:49:13.731199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:56568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.286 [2024-11-15 11:49:13.731204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.286 [2024-11-15 11:49:13.731211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:56576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.286 [2024-11-15 11:49:13.731215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.286 [2024-11-15 11:49:13.731222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:56584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.286 [2024-11-15 11:49:13.731227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.286 [2024-11-15 11:49:13.731233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:56592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.286 [2024-11-15 11:49:13.731238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.286 [2024-11-15 11:49:13.731245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:56600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:59.286 [2024-11-15 11:49:13.731250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.286 
[2024-11-15 11:49:13.731256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:56608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:59.286 [2024-11-15 11:49:13.731261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION pair repeats for every queued read from lba:56616 through lba:56728 ...]
00:25:59.287 [2024-11-15 11:49:13.731445] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:59.287 [2024-11-15 11:49:13.731450] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:59.287 [2024-11-15 11:49:13.731455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56736 len:8 PRP1 0x0 PRP2 0x0
00:25:59.287 [2024-11-15 11:49:13.731462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:59.287 [2024-11-15 11:49:13.731493] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
[... four queued ASYNC EVENT REQUEST admin commands (qid:0 cid:3 through cid:0) aborted with the same SQ DELETION status ...]
00:25:59.287 [2024-11-15 11:49:13.731551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:25:59.287 [2024-11-15 11:49:13.731576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac8d70 (9): Bad file descriptor
00:25:59.287 [2024-11-15 11:49:13.734024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:25:59.287 [2024-11-15 11:49:13.796455] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:25:59.287 11635.40 IOPS, 45.45 MiB/s [2024-11-15T10:49:24.785Z] 11853.83 IOPS, 46.30 MiB/s [2024-11-15T10:49:24.785Z] 12006.71 IOPS, 46.90 MiB/s [2024-11-15T10:49:24.785Z] 12136.00 IOPS, 47.41 MiB/s [2024-11-15T10:49:24.785Z] 12218.00 IOPS, 47.73 MiB/s
[2024-11-15 11:49:18.110473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:59.287 [2024-11-15 11:49:18.110505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same pair repeats for queued reads lba:5416 through lba:5896 (SGL TRANSPORT DATA BLOCK) and queued writes lba:5904 through lba:6416 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) ...]
00:25:59.290 [2024-11-15 11:49:18.112000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:59.290 [2024-11-15 11:49:18.112005] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:59.290 [2024-11-15 11:49:18.112010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6424 len:8 PRP1 0x0 PRP2 0x0
00:25:59.290 [2024-11-15 11:49:18.112015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:59.290 [2024-11-15 11:49:18.112048] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
[... four queued ASYNC EVENT REQUEST admin commands (qid:0 cid:0 through cid:3) aborted with the same SQ DELETION status ...]
00:25:59.290 [2024-11-15 11:49:18.112118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:25:59.290 [2024-11-15 11:49:18.114587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:25:59.290 [2024-11-15 11:49:18.114607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac8d70 (9): Bad file descriptor
00:25:59.290 [2024-11-15 11:49:18.180725] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
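The abort notices condensed above are the expected side effect of the test pulling the active path: bdev_nvme aborts every command still queued on the deleted submission queue (ABORTED - SQ DELETION), marks the controller failed, and reconnects to the next listener. When triaging a saved copy of this console output, the path transitions are easiest to follow by filtering for those notices; a minimal shell sketch, assuming the output was captured to a hypothetical file named console.log:

    # Show only the path changes and reset outcomes from a saved log
    grep -E 'Start failover from|Resetting controller successful|in failed state' console.log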
00:25:59.290 12193.20 IOPS, 47.63 MiB/s [2024-11-15T10:49:24.788Z] 12264.00 IOPS, 47.91 MiB/s [2024-11-15T10:49:24.788Z] 12311.58 IOPS, 48.09 MiB/s [2024-11-15T10:49:24.788Z] 12354.77 IOPS, 48.26 MiB/s [2024-11-15T10:49:24.789Z] 12385.93 IOPS, 48.38 MiB/s
00:25:59.291 Latency(us)
00:25:59.291 [2024-11-15T10:49:24.789Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:59.291 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:59.291 Verification LBA range: start 0x0 length 0x4000
00:25:59.291 NVMe0n1 : 15.00 12436.56 48.58 567.17 0.00 9822.08 392.53 13544.11
00:25:59.291 [2024-11-15T10:49:24.789Z] ===================================================================================================================
00:25:59.291 [2024-11-15T10:49:24.789Z] Total : 12436.56 48.58 567.17 0.00 9822.08 392.53 13544.11
00:25:59.291 Received shutdown signal, test time was about 15.000000 seconds
00:25:59.291
00:25:59.291 Latency(us)
00:25:59.291 [2024-11-15T10:49:24.789Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:59.291 [2024-11-15T10:49:24.789Z] ===================================================================================================================
00:25:59.291 [2024-11-15T10:49:24.789Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:59.291 11:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:25:59.291 11:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:25:59.291 11:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:25:59.291 11:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1190042
00:25:59.291 11:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1190042 /var/tmp/bdevperf.sock
00:25:59.291 11:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:25:59.291 11:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 1190042 ']'
00:25:59.291 11:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:59.291 11:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100
00:25:59.291 11:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:59.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
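The bdevperf instance launched above runs with -z, so it starts idle and waits for a perform_tests RPC on /var/tmp/bdevperf.sock; controllers are attached over that same socket first and the run is only kicked off afterwards. Condensed from the trace into a standalone sketch — $rootdir stands in for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout, and waitforlisten is the helper function sourced from the test framework's autotest_common.sh:

    # Start bdevperf idle (-z) so it waits for a perform_tests RPC on its socket
    "$rootdir/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!
    waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock
    # ...attach the NVMe0 controller paths over the same socket, then start the run:
    "$rootdir/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests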
00:25:59.291 11:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable
00:25:59.291 11:49:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:59.861 11:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:25:59.861 11:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0
00:25:59.861 11:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:59.861 [2024-11-15 11:49:25.275389] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:26:00.121 11:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:26:00.121 [2024-11-15 11:49:25.451840] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:26:00.121 11:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:26:00.381 NVMe0n1
00:26:00.381 11:49:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:26:00.953
00:26:00.953 11:49:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:26:01.215
00:26:01.215 11:49:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:01.215 11:49:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:26:01.476 11:49:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:01.735 11:49:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:26:05.028 11:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:05.028 11:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:26:05.028 11:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1191317
00:26:05.028 11:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:26:05.028 11:49:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1191317
00:26:05.968 {
00:26:05.968 "results": [
00:26:05.968 {
00:26:05.968 "job": "NVMe0n1",
00:26:05.968 "core_mask": "0x1",
00:26:05.968 "workload": "verify", 00:26:05.968 "status": "finished", 00:26:05.968 "verify_range": { 00:26:05.968 "start": 0, 00:26:05.968 "length": 16384 00:26:05.968 }, 00:26:05.968 "queue_depth": 128, 00:26:05.968 "io_size": 4096, 00:26:05.968 "runtime": 1.006009, 00:26:05.968 "iops": 12800.084293480475, 00:26:05.968 "mibps": 50.000329271408106, 00:26:05.968 "io_failed": 0, 00:26:05.968 "io_timeout": 0, 00:26:05.968 "avg_latency_us": 9965.399626206932, 00:26:05.968 "min_latency_us": 2020.6933333333334, 00:26:05.968 "max_latency_us": 10704.213333333333 00:26:05.968 } 00:26:05.968 ], 00:26:05.968 "core_count": 1 00:26:05.968 } 00:26:05.968 11:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:05.968 [2024-11-15 11:49:24.325316] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:26:05.968 [2024-11-15 11:49:24.325376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1190042 ] 00:26:05.968 [2024-11-15 11:49:24.409549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.968 [2024-11-15 11:49:24.437738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:05.968 [2024-11-15 11:49:27.001244] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:05.968 [2024-11-15 11:49:27.001278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.968 [2024-11-15 11:49:27.001287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.968 [2024-11-15 11:49:27.001293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.968 [2024-11-15 11:49:27.001299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.969 [2024-11-15 11:49:27.001305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.969 [2024-11-15 11:49:27.001311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.969 [2024-11-15 11:49:27.001317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:05.969 [2024-11-15 11:49:27.001322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:05.969 [2024-11-15 11:49:27.001328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
00:26:05.969 [2024-11-15 11:49:27.001346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:26:05.969 [2024-11-15 11:49:27.001357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d7dd70 (9): Bad file descriptor 00:26:05.969 [2024-11-15 11:49:27.005773] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:26:05.969 Running I/O for 1 seconds... 00:26:05.969 12749.00 IOPS, 49.80 MiB/s 00:26:05.969 Latency(us) 00:26:05.969 [2024-11-15T10:49:31.467Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:05.969 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:05.969 Verification LBA range: start 0x0 length 0x4000 00:26:05.969 NVMe0n1 : 1.01 12800.08 50.00 0.00 0.00 9965.40 2020.69 10704.21 00:26:05.969 [2024-11-15T10:49:31.467Z] =================================================================================================================== 00:26:05.969 [2024-11-15T10:49:31.467Z] Total : 12800.08 50.00 0.00 0.00 9965.40 2020.69 10704.21 00:26:05.969 11:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:05.969 11:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:26:06.229 11:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:06.229 11:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:06.229 11:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:26:06.490 11:49:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:06.750 11:49:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:26:10.047 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:10.047 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:26:10.047 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1190042 00:26:10.047 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 1190042 ']' 00:26:10.047 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 1190042 00:26:10.047 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:26:10.047 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:10.047 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1190042 00:26:10.047 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:10.047 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:10.047 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1190042' 00:26:10.047 killing process with pid 1190042 00:26:10.047 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 1190042 00:26:10.047 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 1190042 00:26:10.047 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:26:10.047 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:10.308 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:26:10.308 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:10.308 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:26:10.308 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:10.308 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:26:10.308 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:10.308 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:26:10.308 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:10.308 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:10.308 rmmod nvme_tcp 00:26:10.308 rmmod nvme_fabrics 00:26:10.308 rmmod nvme_keyring 00:26:10.308 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:10.308 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:26:10.308 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:26:10.308 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1186326 ']' 00:26:10.308 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1186326 00:26:10.308 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 1186326 ']' 00:26:10.308 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 1186326 00:26:10.308 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:26:10.308 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:10.308 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1186326 00:26:10.308 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:10.308 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:10.308 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1186326' 00:26:10.308 killing process with pid 1186326 00:26:10.308 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 1186326 00:26:10.308 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 1186326 00:26:10.569 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:26:10.569 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:10.569 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:10.569 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:26:10.569 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:26:10.569 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:10.569 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:26:10.569 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:10.569 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:10.570 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:10.570 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:10.570 11:49:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.113 11:49:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:13.113 00:26:13.113 real 0m40.698s 00:26:13.113 user 2m5.308s 00:26:13.113 sys 0m8.899s 00:26:13.113 11:49:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:13.113 11:49:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:13.113 ************************************ 00:26:13.113 END TEST nvmf_failover 00:26:13.113 ************************************ 00:26:13.113 11:49:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:13.113 11:49:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:13.113 11:49:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:13.113 11:49:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:13.113 ************************************ 00:26:13.113 START TEST nvmf_host_discovery 00:26:13.113 ************************************ 00:26:13.113 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:13.113 * Looking for test storage... 
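The cleanup that closes the failover test above is mostly nvmf/common.sh boilerplate; stripped to the commands visible in the trace it is roughly the following (PIDs are per-run values, and the body of _remove_spdk_ns is not shown in this trace, so the final namespace removal is an assumption):

rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem
modprobe -v -r nvme-tcp                                   # the rmmod nvme_tcp/nvme_fabrics/
modprobe -v -r nvme-fabrics                               #   nvme_keyring lines above
kill "$nvmfpid"                                           # target app pid, 1186326 in this run
iptables-save | grep -v SPDK_NVMF | iptables-restore      # strip the SPDK_NVMF ACCEPT rules
ip -4 addr flush cvl_0_1                                  # clear the initiator-side interface
ip netns delete cvl_0_0_ns_spdk                           # assumed: what _remove_spdk_ns undoes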
00:26:13.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:13.113 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:13.113 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:26:13.113 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:13.113 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:13.113 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:13.113 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:13.113 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:13.113 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:26:13.113 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:26:13.113 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:26:13.113 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:26:13.113 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:26:13.113 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:26:13.113 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:26:13.113 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:13.113 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:26:13.113 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:26:13.113 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:13.113 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:13.113 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:13.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.114 --rc genhtml_branch_coverage=1 00:26:13.114 --rc genhtml_function_coverage=1 00:26:13.114 --rc genhtml_legend=1 00:26:13.114 --rc geninfo_all_blocks=1 00:26:13.114 --rc geninfo_unexecuted_blocks=1 00:26:13.114 00:26:13.114 ' 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:13.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.114 --rc genhtml_branch_coverage=1 00:26:13.114 --rc genhtml_function_coverage=1 00:26:13.114 --rc genhtml_legend=1 00:26:13.114 --rc geninfo_all_blocks=1 00:26:13.114 --rc geninfo_unexecuted_blocks=1 00:26:13.114 00:26:13.114 ' 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:13.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.114 --rc genhtml_branch_coverage=1 00:26:13.114 --rc genhtml_function_coverage=1 00:26:13.114 --rc genhtml_legend=1 00:26:13.114 --rc geninfo_all_blocks=1 00:26:13.114 --rc geninfo_unexecuted_blocks=1 00:26:13.114 00:26:13.114 ' 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:13.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:13.114 --rc genhtml_branch_coverage=1 00:26:13.114 --rc genhtml_function_coverage=1 00:26:13.114 --rc genhtml_legend=1 00:26:13.114 --rc geninfo_all_blocks=1 00:26:13.114 --rc geninfo_unexecuted_blocks=1 00:26:13.114 00:26:13.114 ' 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:26:13.114 11:49:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:13.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:26:13.114 11:49:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.247 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:21.247 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:26:21.247 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:21.247 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:21.247 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:21.247 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:21.247 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:21.247 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:26:21.247 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:21.247 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:21.248 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:21.248 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:21.248 11:49:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:21.248 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:21.248 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:21.248 
11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:21.248 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:21.248 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.553 ms 00:26:21.248 00:26:21.248 --- 10.0.0.2 ping statistics --- 00:26:21.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:21.248 rtt min/avg/max/mdev = 0.553/0.553/0.553/0.000 ms 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:21.248 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:21.248 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:26:21.248 00:26:21.248 --- 10.0.0.1 ping statistics --- 00:26:21.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:21.248 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:21.248 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.249 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=1196509 00:26:21.249 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 1196509 00:26:21.249 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:21.249 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 1196509 ']' 00:26:21.249 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:21.249 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:21.249 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:21.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:21.249 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:21.249 11:49:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.249 [2024-11-15 11:49:45.929145] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
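The nvmf_tcp_init trace above splits the two e810 ports between namespaces: cvl_0_0 moves into a private namespace as the target interface (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), with an iptables pinhole for port 4420 and a ping in each direction to prove reachability. In outline (interface, namespace and address values as in this run):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port leaves the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                        # root ns -> target, 0.553 ms above
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target ns -> initiator, 0.278 ms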
00:26:21.249 [2024-11-15 11:49:45.929214] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:21.249 [2024-11-15 11:49:46.029756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.249 [2024-11-15 11:49:46.080017] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:21.249 [2024-11-15 11:49:46.080067] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:21.249 [2024-11-15 11:49:46.080079] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:21.249 [2024-11-15 11:49:46.080089] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:21.249 [2024-11-15 11:49:46.080098] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:21.249 [2024-11-15 11:49:46.080904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:21.510 11:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:21.510 11:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:26:21.510 11:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:21.510 11:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:21.510 11:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.510 11:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:21.510 11:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:21.510 11:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.510 11:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.510 [2024-11-15 11:49:46.813788] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:21.510 11:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.510 11:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:21.510 11:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.510 11:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.510 [2024-11-15 11:49:46.826059] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:21.510 11:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.510 11:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:21.510 11:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.510 11:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.510 null0 00:26:21.510 11:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.510 11:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:21.510 11:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.511 11:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.511 null1 00:26:21.511 11:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.511 11:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:21.511 11:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.511 11:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.511 11:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.511 11:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1196742 00:26:21.511 11:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:21.511 11:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1196742 /tmp/host.sock 00:26:21.511 11:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 1196742 ']' 00:26:21.511 11:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:26:21.511 11:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:21.511 11:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:21.511 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:21.511 11:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:21.511 11:49:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:21.511 [2024-11-15 11:49:46.916477] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
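From here on the discovery test drives two SPDK processes at once and tells them apart purely by RPC socket: the nvmf target runs inside the namespace on the default /var/tmp/spdk.sock, while a second nvmf_tgt instance plays the "host" on /tmp/host.sock. A sketch of the split (binary path shortened relative to the full workspace path above; core masks and sockets as in the trace):

# Target: netns side, default RPC socket, reactor on core 1 (-m 0x2).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

# Host side: root namespace, core 0, private RPC socket.
./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &

# The -s flag then selects which app an RPC lands on:
rpc.py nvmf_create_transport -t tcp -o -u 8192        # target (default socket)
rpc.py -s /tmp/host.sock bdev_nvme_get_controllers    # host app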
00:26:21.511 [2024-11-15 11:49:46.916538] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1196742 ] 00:26:21.772 [2024-11-15 11:49:47.008597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.772 [2024-11-15 11:49:47.061390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.344 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:22.344 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:26:22.344 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:22.344 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:22.344 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.344 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.344 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.344 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:22.344 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.344 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.344 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.344 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:26:22.344 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:26:22.344 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:22.344 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:22.344 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.344 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:22.344 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:22.344 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.344 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.344 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:22.344 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:26:22.344 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:22.344 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:22.344 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:22.344 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:22.344 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:22.344 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.344 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.605 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:22.605 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:22.605 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.605 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.605 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.605 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:26:22.605 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:22.605 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:22.605 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.605 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.605 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:22.605 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:22.605 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.605 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:22.605 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:26:22.605 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:22.605 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:22.605 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.605 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:22.605 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.605 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:22.605 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.605 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:22.605 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:22.605 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.605 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.605 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.605 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:26:22.605 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:22.606 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:22.606 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.606 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.606 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:22.606 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:22.606 11:49:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.606 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:22.606 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:26:22.606 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:22.606 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.606 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.606 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:22.606 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:22.606 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:22.606 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.606 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:22.606 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:22.606 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.606 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.606 [2024-11-15 11:49:48.089361] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:22.606 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.606 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:26:22.606 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:22.606 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:22.606 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.606 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.606 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:22.606 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:22.928 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.928 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:26:22.928 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:26:22.928 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:22.928 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:22.928 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.928 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.928 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:22.928 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:22.928 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.928 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:22.928 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:22.929 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:22.929 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:22.929 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:22.929 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:22.929 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:22.929 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:22.929 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:26:22.929 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:22.929 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:22.929 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.929 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.929 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.929 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:22.929 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:26:22.929 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:26:22.929 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:22.929 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:22.929 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.929 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.929 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.929 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:22.929 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:22.929 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:22.929 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:22.929 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:22.929 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:26:22.929 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:22.929 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.929 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:22.929 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:22.929 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:22.929 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:22.929 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.929 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:26:22.929 11:49:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:26:23.537 [2024-11-15 11:49:48.804774] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:23.537 [2024-11-15 11:49:48.804812] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:23.537 [2024-11-15 11:49:48.804828] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:23.537 
[2024-11-15 11:49:48.893094] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:23.866 [2024-11-15 11:49:49.073440] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:23.866 [2024-11-15 11:49:49.074677] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1d807a0:1 started. 00:26:23.866 [2024-11-15 11:49:49.076549] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:23.866 [2024-11-15 11:49:49.076590] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:23.866 [2024-11-15 11:49:49.082049] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1d807a0 was disconnected and freed. delete nvme_qpair. 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.129 11:49:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # get_notification_count 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:26:24.129 [2024-11-15 11:49:49.548449] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1d4f0a0:1 started. 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:24.129 [2024-11-15 11:49:49.553286] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1d4f0a0 was disconnected and freed. delete nvme_qpair. 
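The @74/@75 xtrace lines above show how the test counts target notifications: it asks the host RPC socket for every notification issued after the last seen ID and measures the returned array with jq. A minimal sketch of that helper, reconstructed from this trace (the canonical definition lives in host/discovery.sh and may differ in detail):

  get_notification_count() {
      # Count events newer than the last seen notify_id, then advance it,
      # mirroring the notification_count=/notify_id= assignments traced above.
      notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i $notify_id | jq '. | length')
      notify_id=$((notify_id + notification_count))
  }

With that accounting, notify_id advances 0 -> 1 -> 2 -> 4 across this run as the attach and namespace events arrive, which is exactly what the is_notification_count_eq checks assert.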
00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:24.129 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:24.130 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:24.130 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:24.130 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:24.130 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:24.130 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:24.130 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:24.130 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:24.130 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:26:24.130 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:24.130 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:24.130 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.130 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.130 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.393 [2024-11-15 11:49:49.649473] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:24.393 [2024-11-15 11:49:49.649874] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:24.393 [2024-11-15 11:49:49.649905] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:24.393 11:49:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.393 [2024-11-15 11:49:49.779290] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:26:24.393 11:49:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:26:24.654 [2024-11-15 11:49:50.044100] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:26:24.654 [2024-11-15 11:49:50.044168] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:24.654 [2024-11-15 11:49:50.044179] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:24.654 [2024-11-15 11:49:50.044186] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:25.596 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:25.596 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:25.596 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:26:25.596 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:25.596 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:25.596 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:25.596 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:25.597 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.597 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:25.597 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.597 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:25.597 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:25.597 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:25.597 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:25.597 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:25.597 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:25.597 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:25.597 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:25.597 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:25.597 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:26:25.597 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:25.597 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.597 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.597 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:25.597 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.597 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:25.597 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:25.597 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:26:25.597 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:25.597 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:25.597 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.597 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.597 [2024-11-15 11:49:50.925217] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:25.597 [2024-11-15 11:49:50.925241] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:25.597 [2024-11-15 11:49:50.927966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.597 [2024-11-15 11:49:50.927983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.597 [2024-11-15 11:49:50.927990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.597 [2024-11-15 11:49:50.927996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.597 [2024-11-15 11:49:50.928002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.597 [2024-11-15 11:49:50.928008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.597 [2024-11-15 11:49:50.928014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:25.597 [2024-11-15 11:49:50.928019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.597 [2024-11-15 11:49:50.928024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50e10 is same with the state(6) to be set 00:26:25.597 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.597 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:25.597 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:25.597 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:25.597 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:25.597 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # 
eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:25.597 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:26:25.597 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:25.597 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:25.597 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.597 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:25.597 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.597 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:25.597 [2024-11-15 11:49:50.937981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d50e10 (9): Bad file descriptor 00:26:25.597 [2024-11-15 11:49:50.948016] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:25.597 [2024-11-15 11:49:50.948026] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:25.597 [2024-11-15 11:49:50.948030] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:25.597 [2024-11-15 11:49:50.948033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:25.597 [2024-11-15 11:49:50.948048] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:25.597 [2024-11-15 11:49:50.948368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.597 [2024-11-15 11:49:50.948378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d50e10 with addr=10.0.0.2, port=4420 00:26:25.597 [2024-11-15 11:49:50.948384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50e10 is same with the state(6) to be set 00:26:25.597 [2024-11-15 11:49:50.948393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d50e10 (9): Bad file descriptor 00:26:25.597 [2024-11-15 11:49:50.948405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:25.597 [2024-11-15 11:49:50.948411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:25.597 [2024-11-15 11:49:50.948417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:25.597 [2024-11-15 11:49:50.948422] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:25.597 [2024-11-15 11:49:50.948426] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:25.597 [2024-11-15 11:49:50.948429] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
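A note on the error burst above: errno 111 is ECONNREFUSED. The test has just removed the 4420 listener, so every bdev_nvme reconnect attempt against 10.0.0.2:4420 is expected to be refused until the discovery poller drops that path, and the ABORTED - SQ DELETION completions are the in-flight async event requests being cancelled as the qpair is torn down. The surviving path can be confirmed over the host socket with the same query get_subsystem_paths issues at host/discovery.sh@63:

  rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  # Prints "4421" once the discovery service has dropped the 4420 path.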
00:26:25.597 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.597 [2024-11-15 11:49:50.958076] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:25.597 [2024-11-15 11:49:50.958084] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:25.597 [2024-11-15 11:49:50.958087] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:25.597 [2024-11-15 11:49:50.958090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:25.597 [2024-11-15 11:49:50.958100] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:25.597 [2024-11-15 11:49:50.958449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.597 [2024-11-15 11:49:50.958458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d50e10 with addr=10.0.0.2, port=4420 00:26:25.597 [2024-11-15 11:49:50.958464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50e10 is same with the state(6) to be set 00:26:25.597 [2024-11-15 11:49:50.958471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d50e10 (9): Bad file descriptor 00:26:25.597 [2024-11-15 11:49:50.958483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:25.597 [2024-11-15 11:49:50.958488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:25.597 [2024-11-15 11:49:50.958493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:25.597 [2024-11-15 11:49:50.958498] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:25.597 [2024-11-15 11:49:50.958501] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:25.597 [2024-11-15 11:49:50.958504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:25.597 [2024-11-15 11:49:50.968130] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:25.597 [2024-11-15 11:49:50.968139] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:25.597 [2024-11-15 11:49:50.968142] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:25.597 [2024-11-15 11:49:50.968146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:25.597 [2024-11-15 11:49:50.968156] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:25.597 [2024-11-15 11:49:50.968485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.597 [2024-11-15 11:49:50.968494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d50e10 with addr=10.0.0.2, port=4420 00:26:25.597 [2024-11-15 11:49:50.968499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50e10 is same with the state(6) to be set 00:26:25.597 [2024-11-15 11:49:50.968507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d50e10 (9): Bad file descriptor 00:26:25.597 [2024-11-15 11:49:50.968524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:25.597 [2024-11-15 11:49:50.968530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:25.597 [2024-11-15 11:49:50.968535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:25.598 [2024-11-15 11:49:50.968539] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:25.598 [2024-11-15 11:49:50.968543] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:25.598 [2024-11-15 11:49:50.968546] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:25.598 [2024-11-15 11:49:50.978184] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:25.598 [2024-11-15 11:49:50.978193] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:25.598 [2024-11-15 11:49:50.978196] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:25.598 [2024-11-15 11:49:50.978200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:25.598 [2024-11-15 11:49:50.978213] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:25.598 [2024-11-15 11:49:50.978515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.598 [2024-11-15 11:49:50.978525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d50e10 with addr=10.0.0.2, port=4420 00:26:25.598 [2024-11-15 11:49:50.978531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50e10 is same with the state(6) to be set 00:26:25.598 [2024-11-15 11:49:50.978539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d50e10 (9): Bad file descriptor 00:26:25.598 [2024-11-15 11:49:50.978551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:25.598 [2024-11-15 11:49:50.978556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:25.598 [2024-11-15 11:49:50.978565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:25.598 [2024-11-15 11:49:50.978570] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:26:25.598 [2024-11-15 11:49:50.978573] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:25.598 [2024-11-15 11:49:50.978577] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:25.598 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.598 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:25.598 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:25.598 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:25.598 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:25.598 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:25.598 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:25.598 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:26:25.598 [2024-11-15 11:49:50.988242] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:25.598 [2024-11-15 11:49:50.988251] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:25.598 [2024-11-15 11:49:50.988254] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:25.598 [2024-11-15 11:49:50.988258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:25.598 [2024-11-15 11:49:50.988267] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:25.598 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:25.598 [2024-11-15 11:49:50.988565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.598 [2024-11-15 11:49:50.988575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d50e10 with addr=10.0.0.2, port=4420 00:26:25.598 [2024-11-15 11:49:50.988581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50e10 is same with the state(6) to be set 00:26:25.598 [2024-11-15 11:49:50.988588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d50e10 (9): Bad file descriptor 00:26:25.598 [2024-11-15 11:49:50.988600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:25.598 [2024-11-15 11:49:50.988605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:25.598 [2024-11-15 11:49:50.988616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:25.598 [2024-11-15 11:49:50.988620] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:26:25.598 [2024-11-15 11:49:50.988623] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:25.598 [2024-11-15 11:49:50.988627] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:25.598 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:25.598 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.598 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:25.598 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.598 11:49:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:25.598 [2024-11-15 11:49:50.998296] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:25.598 [2024-11-15 11:49:50.998310] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:25.598 [2024-11-15 11:49:50.998314] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:25.598 [2024-11-15 11:49:50.998321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:25.598 [2024-11-15 11:49:50.998336] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:25.598 [2024-11-15 11:49:50.998797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.598 [2024-11-15 11:49:50.998828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d50e10 with addr=10.0.0.2, port=4420 00:26:25.598 [2024-11-15 11:49:50.998838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50e10 is same with the state(6) to be set 00:26:25.598 [2024-11-15 11:49:50.998853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d50e10 (9): Bad file descriptor 00:26:25.598 [2024-11-15 11:49:50.998873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:25.598 [2024-11-15 11:49:50.998879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:25.598 [2024-11-15 11:49:50.998885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:25.598 [2024-11-15 11:49:50.998890] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:25.598 [2024-11-15 11:49:50.998894] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:25.598 [2024-11-15 11:49:50.998898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:25.598 [2024-11-15 11:49:51.008367] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:25.598 [2024-11-15 11:49:51.008378] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
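The helpers traced at host/discovery.sh@55 and @59 are thin wrappers over the host RPC socket (rpc_cmd being the suite's wrapper around scripts/rpc.py); sketches reconstructed from the xtrace, illustrative rather than quoted from the script:

  get_subsystem_names() {
      # Controller names as a single sorted line, e.g. "nvme0".
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }

  get_bdev_list() {
      # Bdev names as a single sorted line, e.g. "nvme0n1 nvme0n2".
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

The trailing xargs collapses the sorted names onto one line, which is what lets the test compare with plain strings such as [[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]].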
00:26:25.598 [2024-11-15 11:49:51.008381] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:25.598 [2024-11-15 11:49:51.008385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:25.598 [2024-11-15 11:49:51.008397] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:25.598 [2024-11-15 11:49:51.008796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:25.598 [2024-11-15 11:49:51.008828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d50e10 with addr=10.0.0.2, port=4420 00:26:25.598 [2024-11-15 11:49:51.008840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d50e10 is same with the state(6) to be set 00:26:25.598 [2024-11-15 11:49:51.008854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d50e10 (9): Bad file descriptor 00:26:25.598 [2024-11-15 11:49:51.008863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:25.598 [2024-11-15 11:49:51.008868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:25.598 [2024-11-15 11:49:51.008874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:25.598 [2024-11-15 11:49:51.008879] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:25.598 [2024-11-15 11:49:51.008883] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:25.598 [2024-11-15 11:49:51.008886] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
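All of this polling funnels through autotest_common.sh's waitforcondition, whose shape is visible in the @916-@922 trace lines: store the condition string, retry up to max=10 passes, eval the condition, and sleep 1 between failed attempts. A minimal sketch consistent with that trace (the timeout branch is assumed, since only successful passes appear here):

  waitforcondition() {
      local cond=$1
      local max=10
      while (( max-- )); do
          # Re-evaluate the condition string each pass, as in the @919 eval lines.
          if eval "$cond"; then
              return 0
          fi
          sleep 1
      done
      return 1
  }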
00:26:25.598 [2024-11-15 11:49:51.011924] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:25.598 [2024-11-15 11:49:51.011939] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:25.598 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.598 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:25.598 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:25.598 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:25.598 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:25.598 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:25.598 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:25.598 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:25.598 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:26:25.598 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:25.598 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:25.598 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:25.598 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.598 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.599 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:25.599 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.599 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:26:25.599 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:25.599 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:25.599 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:25.599 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:25.599 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:25.599 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:25.599 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:25.599 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:26:25.599 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:26:25.859 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:25.859 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:25.859 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.859 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.859 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.859 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:25.859 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:25.859 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:26:25.859 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:25.859 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:25.859 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.859 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.859 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.859 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:25.859 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:25.859 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:25.859 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:25.859 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:25.859 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:26:25.859 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:25.859 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:25.859 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.859 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:25.859 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.859 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:25.859 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.859 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:26:25.859 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:25.859 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:25.859 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:25.859 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:25.859 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:25.859 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:25.860 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:26:25.860 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:25.860 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:25.860 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.860 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:25.860 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.860 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:25.860 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.860 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:26:25.860 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:25.860 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:25.860 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:26:25.860 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:25.860 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:25.860 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:25.860 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:25.860 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:25.860 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:26:25.860 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:25.860 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:25.860 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.860 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.860 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:25.860 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:26:25.860 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:26:25.860 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:26:25.860 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:25.860 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:25.860 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:25.860 11:49:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:27.243 [2024-11-15 11:49:52.311938] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:27.243 [2024-11-15 11:49:52.311954] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:27.243 [2024-11-15 11:49:52.311963] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:27.243 [2024-11-15 11:49:52.439337] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:27.243 [2024-11-15 11:49:52.504129] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:26:27.243 [2024-11-15 11:49:52.504680] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1eb89a0:1 started. 00:26:27.243 [2024-11-15 11:49:52.506075] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:27.243 [2024-11-15 11:49:52.506098] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:27.243 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.243 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:27.243 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:26:27.243 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:27.243 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:27.243 [2024-11-15 11:49:52.509861] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1eb89a0 was disconnected and freed. delete nvme_qpair. 
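[Editor's sketch] The wait loops traced above (autotest_common.sh@916-920 markers) all run through the same waitforcondition helper. A minimal reconstruction from the xtrace output alone; the in-tree helper may differ, and the retry pacing is an assumption since no delay shows up in this trace.

waitforcondition() {
	local cond=$1
	local max=10
	while (( max-- )); do
		if eval "$cond"; then
			return 0
		fi
		sleep 1 # assumed pacing; the delay is not visible in this trace
	done
	return 1
}

# Used above as, for example:
#   waitforcondition '[[ "$(get_bdev_list)" == "" ]]'
#   waitforcondition 'get_notification_count && ((notification_count == expected_count))'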
00:26:27.243 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:27.243 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:26:27.243 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:26:27.243 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
00:26:27.243 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:27.243 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:27.243 request:
00:26:27.243 {
00:26:27.243 "name": "nvme",
00:26:27.243 "trtype": "tcp",
00:26:27.243 "traddr": "10.0.0.2",
00:26:27.243 "adrfam": "ipv4",
00:26:27.243 "trsvcid": "8009",
00:26:27.243 "hostnqn": "nqn.2021-12.io.spdk:test",
00:26:27.243 "wait_for_attach": true,
00:26:27.243 "method": "bdev_nvme_start_discovery",
00:26:27.243 "req_id": 1
00:26:27.243 }
00:26:27.243 Got JSON-RPC error response
00:26:27.243 response:
00:26:27.243 {
00:26:27.243 "code": -17,
00:26:27.243 "message": "File exists"
00:26:27.243 }
00:26:27.243 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:26:27.243 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1
00:26:27.243 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:26:27.243 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:26:27.243 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:26:27.243 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs
00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]]
00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list
00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- #
set +x 00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:27.244 request: 00:26:27.244 { 00:26:27.244 "name": "nvme_second", 00:26:27.244 "trtype": "tcp", 00:26:27.244 "traddr": "10.0.0.2", 00:26:27.244 "adrfam": "ipv4", 00:26:27.244 "trsvcid": "8009", 00:26:27.244 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:27.244 "wait_for_attach": true, 00:26:27.244 "method": "bdev_nvme_start_discovery", 00:26:27.244 "req_id": 1 00:26:27.244 } 00:26:27.244 Got JSON-RPC error response 00:26:27.244 response: 00:26:27.244 { 00:26:27.244 "code": -17, 00:26:27.244 "message": "File exists" 00:26:27.244 } 00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 
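[Editor's sketch] Both "File exists" assertions above go through the NOT expect-failure wrapper (autotest_common.sh@650-677 markers). A reconstruction from the xtrace output; the valid_exec_arg check and the es > 128 / [[ -n ... ]] branches are skipped by this trace, so they are omitted here and the in-tree helper may handle them differently.

NOT() {
	local es=0
	"$@" || es=$?
	# (( !es == 0 )) is true exactly when es is nonzero, i.e. when the
	# wrapped command failed, so NOT succeeds only on expected failure.
	(( !es == 0 ))
}

# Used above as:
#   NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second ...  # expects -17 "File exists"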
00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:27.244 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:27.503 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:27.503 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:27.503 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:27.503 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:26:27.504 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:27.504 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:27.504 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:27.504 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:27.504 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:27.504 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:27.504 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:27.504 11:49:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.443 [2024-11-15 11:49:53.769491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:28.443 [2024-11-15 11:49:53.769514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d52220 with addr=10.0.0.2, port=8010 00:26:28.443 [2024-11-15 11:49:53.769525] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:28.443 [2024-11-15 11:49:53.769531] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:28.443 [2024-11-15 11:49:53.769536] 
bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:26:29.382 [2024-11-15 11:49:54.771973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.382 [2024-11-15 11:49:54.772004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d52220 with addr=10.0.0.2, port=8010
00:26:29.382 [2024-11-15 11:49:54.772017] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:26:29.382 [2024-11-15 11:49:54.772022] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:26:29.382 [2024-11-15 11:49:54.772028] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect
00:26:30.327 [2024-11-15 11:49:55.773874] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr
00:26:30.327 request:
00:26:30.327 {
00:26:30.327 "name": "nvme_second",
00:26:30.327 "trtype": "tcp",
00:26:30.327 "traddr": "10.0.0.2",
00:26:30.327 "adrfam": "ipv4",
00:26:30.327 "trsvcid": "8010",
00:26:30.327 "hostnqn": "nqn.2021-12.io.spdk:test",
00:26:30.327 "wait_for_attach": false,
00:26:30.327 "attach_timeout_ms": 3000,
00:26:30.327 "method": "bdev_nvme_start_discovery",
00:26:30.327 "req_id": 1
00:26:30.327 }
00:26:30.327 Got JSON-RPC error response
00:26:30.327 response:
00:26:30.327 {
00:26:30.327 "code": -110,
00:26:30.327 "message": "Connection timed out"
00:26:30.327 }
00:26:30.327 11:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:26:30.327 11:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1
00:26:30.327 11:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:26:30.327 11:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:26:30.327 11:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:26:30.327 11:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs
00:26:30.327 11:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info
00:26:30.327 11:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name'
00:26:30.327 11:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:30.327 11:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort
00:26:30.327 11:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:30.327 11:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs
00:26:30.327 11:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:30.588 11:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]]
00:26:30.588 11:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT
00:26:30.588 11:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1196742
00:26:30.588 11:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini
00:26:30.588 11:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:30.588 11:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery --
nvmf/common.sh@121 -- # sync 00:26:30.588 11:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:30.588 11:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:26:30.588 11:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:30.588 11:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:30.588 rmmod nvme_tcp 00:26:30.588 rmmod nvme_fabrics 00:26:30.588 rmmod nvme_keyring 00:26:30.588 11:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:30.588 11:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:26:30.588 11:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:26:30.588 11:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 1196509 ']' 00:26:30.588 11:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 1196509 00:26:30.588 11:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 1196509 ']' 00:26:30.588 11:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 1196509 00:26:30.588 11:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:26:30.588 11:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:30.588 11:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1196509 00:26:30.588 11:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:30.588 11:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:30.588 11:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1196509' 00:26:30.588 killing process with pid 1196509 00:26:30.588 11:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 1196509 00:26:30.588 11:49:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 1196509 00:26:30.588 11:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:30.588 11:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:30.588 11:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:30.588 11:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:26:30.588 11:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:26:30.588 11:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:30.588 11:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:26:30.588 11:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:30.588 11:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:30.588 11:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:30.588 11:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:30.588 
11:49:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.134 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:33.134 00:26:33.134 real 0m20.076s 00:26:33.134 user 0m23.133s 00:26:33.134 sys 0m7.143s 00:26:33.134 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:33.134 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:33.134 ************************************ 00:26:33.134 END TEST nvmf_host_discovery 00:26:33.134 ************************************ 00:26:33.134 11:49:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:33.134 11:49:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:33.134 11:49:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:33.134 11:49:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.134 ************************************ 00:26:33.134 START TEST nvmf_host_multipath_status 00:26:33.134 ************************************ 00:26:33.134 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:33.134 * Looking for test storage... 00:26:33.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:33.134 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:33.134 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:26:33.134 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:33.134 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:33.134 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:33.134 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:33.134 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:33.134 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:26:33.134 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:26:33.134 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:26:33.134 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:26:33.134 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:26:33.134 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:26:33.134 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:26:33.134 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:33.134 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:26:33.134 11:49:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:26:33.134 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:33.134 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:33.134 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:26:33.134 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:26:33.134 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:33.134 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:26:33.134 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:26:33.134 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:26:33.134 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:26:33.134 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:33.134 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:26:33.134 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:26:33.134 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:33.134 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:33.134 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:26:33.134 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:33.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.135 --rc genhtml_branch_coverage=1 00:26:33.135 --rc genhtml_function_coverage=1 00:26:33.135 --rc genhtml_legend=1 00:26:33.135 --rc geninfo_all_blocks=1 00:26:33.135 --rc geninfo_unexecuted_blocks=1 00:26:33.135 00:26:33.135 ' 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:33.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.135 --rc genhtml_branch_coverage=1 00:26:33.135 --rc genhtml_function_coverage=1 00:26:33.135 --rc genhtml_legend=1 00:26:33.135 --rc geninfo_all_blocks=1 00:26:33.135 --rc geninfo_unexecuted_blocks=1 00:26:33.135 00:26:33.135 ' 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:33.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.135 --rc genhtml_branch_coverage=1 00:26:33.135 --rc genhtml_function_coverage=1 00:26:33.135 --rc genhtml_legend=1 00:26:33.135 --rc geninfo_all_blocks=1 00:26:33.135 --rc geninfo_unexecuted_blocks=1 00:26:33.135 00:26:33.135 ' 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:33.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.135 --rc genhtml_branch_coverage=1 00:26:33.135 --rc genhtml_function_coverage=1 00:26:33.135 --rc 
genhtml_legend=1 00:26:33.135 --rc geninfo_all_blocks=1 00:26:33.135 --rc geninfo_unexecuted_blocks=1 00:26:33.135 00:26:33.135 ' 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' 
-eq 1 ']' 00:26:33.135 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:26:33.135 11:49:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:41.281 11:50:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:41.281 
11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:41.281 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:41.281 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:41.281 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:41.281 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:41.281 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:41.282 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:41.282 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:41.282 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:41.282 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:41.282 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:41.282 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:41.282 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:41.282 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:41.282 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:41.282 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:26:41.282 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:41.282 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:41.282 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:41.282 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:26:41.282 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:26:41.282 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:41.282 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.738 ms
00:26:41.282
00:26:41.282 --- 10.0.0.2 ping statistics ---
00:26:41.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:41.282 rtt min/avg/max/mdev = 0.738/0.738/0.738/0.000 ms
00:26:41.282 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:41.282 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:41.282 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms
00:26:41.282
00:26:41.282 --- 10.0.0.1 ping statistics ---
00:26:41.282 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:41.282 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms
00:26:41.282 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:41.282 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0
00:26:41.282 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:26:41.282 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:41.282 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:26:41.282 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:26:41.282 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:41.282 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:26:41.282 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:26:41.282 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3
00:26:41.282 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:26:41.282 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable
00:26:41.282 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:26:41.282 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1202895
00:26:41.282 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 1202895
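[Editor's sketch] The nvmfappstart step traced next boils down to launching nvmf_tgt inside the freshly built cvl_0_0_ns_spdk namespace (the exact command appears in the following entries) and polling until its RPC socket answers. A condensed sketch; this waitforlisten is a simplified stand-in for the autotest_common.sh helper, which probes the socket through rpc.py rather than a bare -S test.

ip netns exec cvl_0_0_ns_spdk \
	/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
	-i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!

waitforlisten() {
	local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
	local i
	for ((i = 0; i < 100; i++)); do
		kill -0 "$pid" 2> /dev/null || return 1 # target died during startup
		[[ -S $rpc_addr ]] && return 0          # RPC socket is up
		sleep 0.1
	done
	return 1
}
waitforlisten "$nvmfpid"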
00:26:41.282 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:41.282 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 1202895 ']' 00:26:41.282 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:41.282 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:41.282 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:41.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:41.282 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:41.282 11:50:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:41.282 [2024-11-15 11:50:05.870594] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:26:41.282 [2024-11-15 11:50:05.870658] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:41.282 [2024-11-15 11:50:05.971614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:41.282 [2024-11-15 11:50:06.023460] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:41.282 [2024-11-15 11:50:06.023512] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:41.282 [2024-11-15 11:50:06.023521] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:41.282 [2024-11-15 11:50:06.023529] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:41.282 [2024-11-15 11:50:06.023535] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
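[Editor's note] Because the target was started with -e 0xFFFF, all tracepoint groups are live; the app_setup_trace notices above name the two inspection paths. Both commands are quoted from the log itself; the /tmp destination is an arbitrary choice for illustration.

spdk_trace -s nvmf -i 0          # snapshot of events at runtime, per the notice
cp /dev/shm/nvmf_trace.0 /tmp/   # raw shm copy for offline analysis/debug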
00:26:41.282 [2024-11-15 11:50:06.025229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:41.282 [2024-11-15 11:50:06.025233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.282 11:50:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:41.282 11:50:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:26:41.282 11:50:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:41.282 11:50:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:41.282 11:50:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:41.282 11:50:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:41.282 11:50:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1202895 00:26:41.282 11:50:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:41.543 [2024-11-15 11:50:06.909594] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:41.543 11:50:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:41.804 Malloc0 00:26:41.804 11:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:42.066 11:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:42.327 11:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:42.327 [2024-11-15 11:50:07.736644] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:42.327 11:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:42.586 [2024-11-15 11:50:07.933124] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:42.586 11:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1203290 00:26:42.586 11:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:42.586 11:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:42.586 11:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1203290 
/var/tmp/bdevperf.sock 00:26:42.586 11:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 1203290 ']' 00:26:42.586 11:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:42.586 11:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:42.586 11:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:42.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:42.586 11:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:42.586 11:50:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:43.527 11:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:43.527 11:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:26:43.527 11:50:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:43.787 11:50:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:44.047 Nvme0n1 00:26:44.047 11:50:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:44.619 Nvme0n1 00:26:44.619 11:50:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:44.619 11:50:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:46.529 11:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:46.529 11:50:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:46.791 11:50:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:47.051 11:50:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:47.993 11:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:47.993 11:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:47.994 11:50:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:47.994 11:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:48.254 11:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.254 11:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:48.254 11:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.254 11:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:48.254 11:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:48.254 11:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:48.254 11:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.254 11:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:48.515 11:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.515 11:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:48.515 11:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:48.515 11:50:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.775 11:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:48.775 11:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:48.775 11:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:48.775 11:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:49.036 11:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:49.036 11:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:49.036 11:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:49.036 11:50:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:49.036 11:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:49.036 11:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:49.036 11:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:49.297 11:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:49.558 11:50:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:50.500 11:50:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:50.500 11:50:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:50.500 11:50:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.500 11:50:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:50.500 11:50:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:50.500 11:50:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:50.500 11:50:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:50.501 11:50:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:50.761 11:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:50.761 11:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:50.761 11:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:50.761 11:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.021 11:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.021 11:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:51.021 11:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.021 11:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:51.282 11:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.282 11:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:51.282 11:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.282 11:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:51.282 11:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.282 11:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:51.282 11:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:51.282 11:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.545 11:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.545 11:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:51.545 11:50:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:51.806 11:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:51.806 11:50:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:53.192 11:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:53.192 11:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:53.192 11:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.192 11:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:53.192 11:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:53.192 11:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:53.192 11:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.192 11:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:53.192 11:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:53.192 11:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:53.192 11:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.192 11:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:53.453 11:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:53.453 11:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:53.453 11:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.453 11:50:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:53.714 11:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:53.714 11:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:53.714 11:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.714 11:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:53.975 11:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:53.975 11:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:53.975 11:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:53.975 11:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:53.975 11:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:53.976 11:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:53.976 11:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
non_optimized 00:26:54.236 11:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:54.497 11:50:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:55.441 11:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:55.441 11:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:55.441 11:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.441 11:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:55.702 11:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:55.702 11:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:55.702 11:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.702 11:50:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:55.702 11:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:55.702 11:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:55.702 11:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.702 11:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:55.963 11:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:55.963 11:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:55.963 11:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.963 11:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:56.224 11:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:56.224 11:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:56.224 11:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:26:56.224 11:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:56.224 11:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:56.224 11:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:56.224 11:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:56.224 11:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:56.485 11:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:56.485 11:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:56.485 11:50:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:56.746 11:50:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:56.746 11:50:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:58.129 11:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:58.129 11:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:58.129 11:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.129 11:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:58.129 11:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:58.129 11:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:58.129 11:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.129 11:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:58.129 11:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:58.129 11:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:58.129 11:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.129 11:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:58.389 11:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:58.389 11:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:58.389 11:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.389 11:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:58.648 11:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:58.648 11:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:58.648 11:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.648 11:50:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:58.909 11:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:58.909 11:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:58.909 11:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.909 11:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:58.909 11:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:58.909 11:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:58.909 11:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:59.170 11:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:59.429 11:50:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:27:00.373 11:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:27:00.373 11:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:00.373 11:50:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:00.373 11:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:00.634 11:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:00.634 11:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:00.634 11:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:00.634 11:50:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:00.634 11:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:00.634 11:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:00.634 11:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:00.634 11:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:00.895 11:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:00.895 11:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:00.895 11:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:00.895 11:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:01.156 11:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:01.156 11:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:01.156 11:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:01.156 11:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:01.417 11:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:01.417 11:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:01.417 11:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:01.417 
11:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:01.417 11:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:01.417 11:50:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:27:01.678 11:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:27:01.678 11:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:01.939 11:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:01.939 11:50:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:27:03.324 11:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:27:03.324 11:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:03.324 11:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:03.324 11:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:03.324 11:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:03.324 11:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:03.324 11:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:03.324 11:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:03.324 11:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:03.324 11:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:03.324 11:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:03.324 11:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:03.585 11:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:03.585 11:50:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:03.585 11:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:03.585 11:50:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:03.846 11:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:03.846 11:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:03.846 11:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:03.846 11:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:03.846 11:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:03.846 11:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:03.846 11:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:03.846 11:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:04.107 11:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:04.107 11:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:27:04.107 11:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:04.368 11:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:04.368 11:50:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:27:05.756 11:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:27:05.756 11:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:05.756 11:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.756 11:50:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:05.756 11:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:05.756 11:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:05.756 11:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.756 11:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:05.756 11:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:05.756 11:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:05.756 11:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.756 11:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:06.017 11:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:06.017 11:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:06.017 11:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:06.017 11:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:06.278 11:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:06.278 11:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:06.278 11:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:06.278 11:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:06.538 11:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:06.539 11:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:06.539 11:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:06.539 11:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:06.539 11:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:06.539 11:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:27:06.539 
11:50:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:06.799 11:50:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:07.060 11:50:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:27:08.003 11:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:27:08.003 11:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:08.003 11:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.003 11:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:08.004 11:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:08.004 11:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:08.265 11:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.265 11:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:08.265 11:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:08.265 11:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:08.265 11:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.265 11:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:08.526 11:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:08.526 11:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:08.526 11:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.526 11:50:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:08.788 11:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:08.788 11:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:08.788 11:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.788 11:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:08.788 11:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:08.788 11:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:08.788 11:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.788 11:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:09.049 11:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:09.049 11:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:27:09.049 11:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:09.311 11:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:09.573 11:50:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:27:10.516 11:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:27:10.516 11:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:10.516 11:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:10.516 11:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:10.516 11:50:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:10.516 11:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:10.516 11:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:10.516 11:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:10.777 11:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:27:10.777 11:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:10.777 11:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:10.777 11:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:11.038 11:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:11.038 11:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:11.039 11:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:11.039 11:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:11.300 11:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:11.300 11:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:11.300 11:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:11.300 11:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:11.300 11:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:11.300 11:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:11.300 11:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:11.300 11:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:11.561 11:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:11.561 11:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1203290 00:27:11.561 11:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 1203290 ']' 00:27:11.561 11:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 1203290 00:27:11.561 11:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:27:11.561 11:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:11.561 11:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1203290 00:27:11.561 11:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # 
process_name=reactor_2
00:27:11.561 11:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']'
00:27:11.561 11:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1203290'
00:27:11.561 killing process with pid 1203290
00:27:11.561 11:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 1203290
00:27:11.561 11:50:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 1203290
00:27:11.561 {
00:27:11.561 "results": [
00:27:11.561 {
00:27:11.561 "job": "Nvme0n1",
00:27:11.561 "core_mask": "0x4",
00:27:11.561 "workload": "verify",
00:27:11.561 "status": "terminated",
00:27:11.561 "verify_range": {
00:27:11.561 "start": 0,
00:27:11.561 "length": 16384
00:27:11.561 },
00:27:11.561 "queue_depth": 128,
00:27:11.561 "io_size": 4096,
00:27:11.561 "runtime": 26.858542,
00:27:11.561 "iops": 12001.693911754406,
00:27:11.561 "mibps": 46.88161684279065,
00:27:11.561 "io_failed": 0,
00:27:11.561 "io_timeout": 0,
00:27:11.561 "avg_latency_us": 10645.923790189485,
00:27:11.561 "min_latency_us": 607.5733333333334,
00:27:11.561 "max_latency_us": 3075822.933333333
00:27:11.561 }
00:27:11.561 ],
00:27:11.561 "core_count": 1
00:27:11.561 }
00:27:11.847 11:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1203290
00:27:11.847 11:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:27:11.847 [2024-11-15 11:50:08.016889] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
00:27:11.847 [2024-11-15 11:50:08.016991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1203290 ]
00:27:11.847 [2024-11-15 11:50:08.110254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:11.847 [2024-11-15 11:50:08.160794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:27:11.847 Running I/O for 90 seconds...
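A quick consistency check on the results block above: bdevperf's mibps field is just iops × io_size rescaled to MiB/s, and the reported values agree; the ~26.9 s runtime (against the -t 90 requested on the command line) reflects bdevperf being killed once the ANA state matrix finished.

    # iops * io_size in bytes, converted to MiB/s (1 MiB = 1048576 B):
    echo 'scale=8; 12001.693911754406 * 4096 / 1048576' | bc
    # -> 46.88161684, matching "mibps": 46.88161684279065 above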
00:27:11.847 11385.00 IOPS, 44.47 MiB/s [2024-11-15T10:50:37.345Z] 11388.50 IOPS, 44.49 MiB/s [2024-11-15T10:50:37.345Z] 11388.67 IOPS, 44.49 MiB/s [2024-11-15T10:50:37.345Z] 11667.50 IOPS, 45.58 MiB/s [2024-11-15T10:50:37.345Z] 11915.00 IOPS, 46.54 MiB/s [2024-11-15T10:50:37.345Z] 12069.33 IOPS, 47.15 MiB/s [2024-11-15T10:50:37.345Z] 12160.57 IOPS, 47.50 MiB/s [2024-11-15T10:50:37.345Z] 12247.25 IOPS, 47.84 MiB/s [2024-11-15T10:50:37.345Z] 12335.56 IOPS, 48.19 MiB/s [2024-11-15T10:50:37.345Z] 12387.20 IOPS, 48.39 MiB/s [2024-11-15T10:50:37.345Z] 12429.36 IOPS, 48.55 MiB/s [2024-11-15T10:50:37.345Z] [2024-11-15 11:50:22.040471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.847 [2024-11-15 11:50:22.040505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:11.847 [2024-11-15 11:50:22.040522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.847 [2024-11-15 11:50:22.040528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:11.847 [2024-11-15 11:50:22.040539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.847 [2024-11-15 11:50:22.040545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:11.847 [2024-11-15 11:50:22.040555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.847 [2024-11-15 11:50:22.040560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:11.847 [2024-11-15 11:50:22.040575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.847 [2024-11-15 11:50:22.040581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:11.847 [2024-11-15 11:50:22.040592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.847 [2024-11-15 11:50:22.040597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:11.847 [2024-11-15 11:50:22.040607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:13088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.847 [2024-11-15 11:50:22.040612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:11.847 [2024-11-15 11:50:22.040623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.847 [2024-11-15 11:50:22.040628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:11.848 [2024-11-15 11:50:22.040639] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.848 [2024-11-15 11:50:22.040644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:11.848 [2024-11-15 11:50:22.040654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.848 [2024-11-15 11:50:22.040665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:11.848 [2024-11-15 11:50:22.040676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.848 [2024-11-15 11:50:22.040682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:11.848 [2024-11-15 11:50:22.040692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.848 [2024-11-15 11:50:22.040698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:11.848 [2024-11-15 11:50:22.040709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.848 [2024-11-15 11:50:22.040714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:11.848 [2024-11-15 11:50:22.040724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.848 [2024-11-15 11:50:22.040729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:11.848 [2024-11-15 11:50:22.040740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.848 [2024-11-15 11:50:22.040745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:11.848 [2024-11-15 11:50:22.040755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:13160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.848 [2024-11-15 11:50:22.040760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:11.848 [2024-11-15 11:50:22.040772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.848 [2024-11-15 11:50:22.040777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:11.848 [2024-11-15 11:50:22.040925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.848 [2024-11-15 11:50:22.040933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 
00:27:11.848 [2024-11-15 11:50:22.040944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.848 [2024-11-15 11:50:22.040950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:11.848 [2024-11-15 11:50:22.040960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.848 [2024-11-15 11:50:22.040966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:11.848 [2024-11-15 11:50:22.040981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.848 [2024-11-15 11:50:22.040987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:11.848 [2024-11-15 11:50:22.040997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.848 [2024-11-15 11:50:22.041003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:11.848 [2024-11-15 11:50:22.041015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.848 [2024-11-15 11:50:22.041020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.848 [2024-11-15 11:50:22.041032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.848 [2024-11-15 11:50:22.041041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.848 [2024-11-15 11:50:22.041051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.848 [2024-11-15 11:50:22.041057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:11.848 [2024-11-15 11:50:22.041067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.848 [2024-11-15 11:50:22.041072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:11.848 [2024-11-15 11:50:22.041082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.848 [2024-11-15 11:50:22.041088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:11.848 [2024-11-15 11:50:22.041098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:13256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.848 [2024-11-15 11:50:22.041103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:16 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:11.848 [2024-11-15 11:50:22.041113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.848 [2024-11-15 11:50:22.041118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:11.848 [2024-11-15 11:50:22.041128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.848 [2024-11-15 11:50:22.041134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:11.848 [2024-11-15 11:50:22.041144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.848 [2024-11-15 11:50:22.041149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:11.848 [2024-11-15 11:50:22.041159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.848 [2024-11-15 11:50:22.041164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:11.848 [2024-11-15 11:50:22.041174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.848 [2024-11-15 11:50:22.041180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:11.848 [2024-11-15 11:50:22.041190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.848 [2024-11-15 11:50:22.041195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:11.848 [2024-11-15 11:50:22.041207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.848 [2024-11-15 11:50:22.041212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:11.848 [2024-11-15 11:50:22.041222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.848 [2024-11-15 11:50:22.041227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:11.848 [2024-11-15 11:50:22.041237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.848 [2024-11-15 11:50:22.041242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:11.848 [2024-11-15 11:50:22.041252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.848 [2024-11-15 11:50:22.041258] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:11.848 [2024-11-15 11:50:22.041268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:13344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.848 [2024-11-15 11:50:22.041273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:11.848 [2024-11-15 11:50:22.041283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.848 [2024-11-15 11:50:22.041289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:11.848 [2024-11-15 11:50:22.041299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.848 [2024-11-15 11:50:22.041304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:11.848 [2024-11-15 11:50:22.041314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.848 [2024-11-15 11:50:22.041320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:11.848 [2024-11-15 11:50:22.041331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.848 [2024-11-15 11:50:22.041336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:11.848 [2024-11-15 11:50:22.041346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.848 [2024-11-15 11:50:22.041351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:11.848 [2024-11-15 11:50:22.041362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.849 [2024-11-15 11:50:22.041367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:11.849 [2024-11-15 11:50:22.041378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.849 [2024-11-15 11:50:22.041383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:11.849 [2024-11-15 11:50:22.041394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.849 [2024-11-15 11:50:22.041400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:11.849 [2024-11-15 11:50:22.041410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:11.849 [2024-11-15 11:50:22.041415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:11.849 [2024-11-15 11:50:22.041425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.849 [2024-11-15 11:50:22.041431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:11.849 [2024-11-15 11:50:22.041441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.849 [2024-11-15 11:50:22.041446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:11.849 [2024-11-15 11:50:22.041456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:13440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.849 [2024-11-15 11:50:22.041462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:11.849 [2024-11-15 11:50:22.041472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.849 [2024-11-15 11:50:22.041477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:11.849 [2024-11-15 11:50:22.041487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.849 [2024-11-15 11:50:22.041492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:11.849 [2024-11-15 11:50:22.041503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.849 [2024-11-15 11:50:22.041508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:11.849 [2024-11-15 11:50:22.041518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.849 [2024-11-15 11:50:22.041523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.849 [2024-11-15 11:50:22.041533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.849 [2024-11-15 11:50:22.041539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.849 [2024-11-15 11:50:22.041549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.849 [2024-11-15 11:50:22.041554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:11.849 [2024-11-15 11:50:22.041675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 
lba:13496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.849 [2024-11-15 11:50:22.041682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:11.849 [2024-11-15 11:50:22.041692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.849 [2024-11-15 11:50:22.041699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:11.849 [2024-11-15 11:50:22.041709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.849 [2024-11-15 11:50:22.041714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:11.849 [2024-11-15 11:50:22.041725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.849 [2024-11-15 11:50:22.041730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:11.849 [2024-11-15 11:50:22.041741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.849 [2024-11-15 11:50:22.041747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:11.849 [2024-11-15 11:50:22.041757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.849 [2024-11-15 11:50:22.041762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:11.849 [2024-11-15 11:50:22.041773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.849 [2024-11-15 11:50:22.041778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:11.849 [2024-11-15 11:50:22.042158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.849 [2024-11-15 11:50:22.042170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:11.849 [2024-11-15 11:50:22.042185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.849 [2024-11-15 11:50:22.042195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:11.849 [2024-11-15 11:50:22.042206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:12552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.849 [2024-11-15 11:50:22.042211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:11.849 [2024-11-15 11:50:22.042221] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.849 [2024-11-15 11:50:22.042226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:11.849 [2024-11-15 11:50:22.042237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.849 [2024-11-15 11:50:22.042242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:11.849 [2024-11-15 11:50:22.042252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.849 [2024-11-15 11:50:22.042257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:11.849 [2024-11-15 11:50:22.042268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.849 [2024-11-15 11:50:22.042273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:11.849 [2024-11-15 11:50:22.042286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.849 [2024-11-15 11:50:22.042291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:11.849 [2024-11-15 11:50:22.042302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.849 [2024-11-15 11:50:22.042307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:11.849 [2024-11-15 11:50:22.042317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.849 [2024-11-15 11:50:22.042323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:11.849 [2024-11-15 11:50:22.042333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.849 [2024-11-15 11:50:22.042339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:11.849 [2024-11-15 11:50:22.042349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.849 [2024-11-15 11:50:22.042354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:11.849 [2024-11-15 11:50:22.042365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.849 [2024-11-15 11:50:22.042370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 
00:27:11.849 [2024-11-15 11:50:22.042380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.849 [2024-11-15 11:50:22.042389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:11.849 [2024-11-15 11:50:22.042401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.849 [2024-11-15 11:50:22.042407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:11.849 [2024-11-15 11:50:22.042417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.849 [2024-11-15 11:50:22.042422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:11.849 [2024-11-15 11:50:22.042432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.849 [2024-11-15 11:50:22.042438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:11.849 [2024-11-15 11:50:22.042449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.849 [2024-11-15 11:50:22.042455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:11.849 [2024-11-15 11:50:22.042466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.849 [2024-11-15 11:50:22.042471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:11.849 [2024-11-15 11:50:22.042485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.850 [2024-11-15 11:50:22.042491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:11.850 [2024-11-15 11:50:22.042501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.850 [2024-11-15 11:50:22.042507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:11.850 [2024-11-15 11:50:22.042517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.850 [2024-11-15 11:50:22.042522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:11.850 [2024-11-15 11:50:22.042532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.850 [2024-11-15 11:50:22.042538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:18 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.850 [2024-11-15 11:50:22.042548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.850 [2024-11-15 11:50:22.042553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.850 [2024-11-15 11:50:22.042568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.850 [2024-11-15 11:50:22.042574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:11.850 [2024-11-15 11:50:22.042584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.850 [2024-11-15 11:50:22.042590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:11.850 [2024-11-15 11:50:22.042600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.850 [2024-11-15 11:50:22.042605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:11.850 [2024-11-15 11:50:22.042615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.850 [2024-11-15 11:50:22.042621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:11.850 [2024-11-15 11:50:22.042631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.850 [2024-11-15 11:50:22.042636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:11.850 [2024-11-15 11:50:22.042646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.850 [2024-11-15 11:50:22.042652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:11.850 [2024-11-15 11:50:22.042662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:12768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.850 [2024-11-15 11:50:22.042667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:11.850 [2024-11-15 11:50:22.042679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.850 [2024-11-15 11:50:22.042684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:11.850 [2024-11-15 11:50:22.042694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.850 [2024-11-15 11:50:22.042699] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:11.850 [2024-11-15 11:50:22.042710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.850 [2024-11-15 11:50:22.042715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:11.850 [2024-11-15 11:50:22.042726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.850 [2024-11-15 11:50:22.042731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:11.850 [2024-11-15 11:50:22.042741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.850 [2024-11-15 11:50:22.042746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:11.850 [2024-11-15 11:50:22.042757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.850 [2024-11-15 11:50:22.042763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:11.850 [2024-11-15 11:50:22.042773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.850 [2024-11-15 11:50:22.042778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:11.850 [2024-11-15 11:50:22.042789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.850 [2024-11-15 11:50:22.042794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:11.850 [2024-11-15 11:50:22.042804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.850 [2024-11-15 11:50:22.042809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:11.850 [2024-11-15 11:50:22.042820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.850 [2024-11-15 11:50:22.042825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:11.850 [2024-11-15 11:50:22.042835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.850 [2024-11-15 11:50:22.042841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:11.850 [2024-11-15 11:50:22.042851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:11.850 [2024-11-15 11:50:22.042857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:11.850 [2024-11-15 11:50:22.042867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.850 [2024-11-15 11:50:22.042874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:11.850 [2024-11-15 11:50:22.042884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.850 [2024-11-15 11:50:22.042889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:11.850 [2024-11-15 11:50:22.042899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.850 [2024-11-15 11:50:22.042905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:11.850 [2024-11-15 11:50:22.042915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.850 [2024-11-15 11:50:22.042920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:11.850 [2024-11-15 11:50:22.042930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:12904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.850 [2024-11-15 11:50:22.042936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:11.850 [2024-11-15 11:50:22.042946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.850 [2024-11-15 11:50:22.042952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:11.850 [2024-11-15 11:50:22.042962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.850 [2024-11-15 11:50:22.042967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:11.850 [2024-11-15 11:50:22.042977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.850 [2024-11-15 11:50:22.042982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:11.850 [2024-11-15 11:50:22.042993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.850 [2024-11-15 11:50:22.042998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:11.850 [2024-11-15 11:50:22.043008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 
nsid:1 lba:12944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.850 [2024-11-15 11:50:22.043014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:11.850 [2024-11-15 11:50:22.043024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.850 [2024-11-15 11:50:22.043029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.850 [2024-11-15 11:50:22.043039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.850 [2024-11-15 11:50:22.043044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.850 [2024-11-15 11:50:22.043054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.850 [2024-11-15 11:50:22.043061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.850 [2024-11-15 11:50:22.043071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.850 [2024-11-15 11:50:22.043077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:11.850 [2024-11-15 11:50:22.043087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.850 [2024-11-15 11:50:22.043092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:11.850 [2024-11-15 11:50:22.043102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.851 [2024-11-15 11:50:22.043108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:11.851 [2024-11-15 11:50:22.043118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.851 [2024-11-15 11:50:22.043123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:11.851 [2024-11-15 11:50:22.043133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.851 [2024-11-15 11:50:22.043138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:11.851 [2024-11-15 11:50:22.043148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.851 [2024-11-15 11:50:22.043153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:11.851 [2024-11-15 11:50:22.043163] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.851 [2024-11-15 11:50:22.043169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:11.851 [2024-11-15 11:50:22.043179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:13032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.851 [2024-11-15 11:50:22.043184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:11.851 [2024-11-15 11:50:22.043194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:13040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.851 [2024-11-15 11:50:22.043199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:11.851 [2024-11-15 11:50:22.043210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.851 [2024-11-15 11:50:22.043215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:11.851 [2024-11-15 11:50:22.043225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.851 [2024-11-15 11:50:22.043231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:11.851 [2024-11-15 11:50:22.043241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.851 [2024-11-15 11:50:22.043246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:11.851 [2024-11-15 11:50:22.043257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.851 [2024-11-15 11:50:22.043262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:11.851 [2024-11-15 11:50:22.043273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.851 [2024-11-15 11:50:22.043278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:11.851 [2024-11-15 11:50:22.043288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.851 [2024-11-15 11:50:22.043294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:11.851 [2024-11-15 11:50:22.043764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.851 [2024-11-15 11:50:22.043778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:27:11.851 [2024-11-15 11:50:22.043790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.851 [2024-11-15 11:50:22.043796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:11.851 [2024-11-15 11:50:22.043807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:13112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.851 [2024-11-15 11:50:22.043813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:11.851 [2024-11-15 11:50:22.043823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.851 [2024-11-15 11:50:22.043828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:11.851 [2024-11-15 11:50:22.043839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.851 [2024-11-15 11:50:22.043844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:11.851 [2024-11-15 11:50:22.043854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.851 [2024-11-15 11:50:22.043860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:11.851 [2024-11-15 11:50:22.043870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.851 [2024-11-15 11:50:22.043875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:11.851 [2024-11-15 11:50:22.043885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.851 [2024-11-15 11:50:22.043890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:11.851 [2024-11-15 11:50:22.043901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.851 [2024-11-15 11:50:22.043906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:11.851 [2024-11-15 11:50:22.043918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.851 [2024-11-15 11:50:22.043923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:11.851 [2024-11-15 11:50:22.043934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.851 [2024-11-15 11:50:22.043939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:65 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:11.851 [2024-11-15 11:50:22.043949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.851 [2024-11-15 11:50:22.043954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:11.851 [2024-11-15 11:50:22.043964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.851 [2024-11-15 11:50:22.043970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:11.851 [2024-11-15 11:50:22.043980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.851 [2024-11-15 11:50:22.043985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:11.851 [2024-11-15 11:50:22.043995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.851 [2024-11-15 11:50:22.044000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:11.851 [2024-11-15 11:50:22.044011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.851 [2024-11-15 11:50:22.044016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.851 [2024-11-15 11:50:22.044026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.851 [2024-11-15 11:50:22.044031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.851 [2024-11-15 11:50:22.044042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.851 [2024-11-15 11:50:22.044047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:11.851 [2024-11-15 11:50:22.044057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.851 [2024-11-15 11:50:22.044062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:11.851 [2024-11-15 11:50:22.044072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.851 [2024-11-15 11:50:22.044077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:11.851 [2024-11-15 11:50:22.044088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.851 [2024-11-15 11:50:22.044093] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:27:11.851-00:27:11.855 [2024-11-15 11:50:22.044103-22.051251] nvme_qpair.c: [several hundred near-identical *NOTICE* pairs from 243:nvme_io_qpair_print_command and 474:spdk_nvme_print_completion condensed here. Every queued I/O on qid:1 (cids 0-126) -- WRITEs at lba:13040-13536 (len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READs at lba:12520-13032 (len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) -- completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0, with sqhd advancing from 0x0026 through 0x007f and wrapping to 0x0000; a ~4 ms quiet gap separates 22.046924 from 22.051202.]
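The flood above is easier to read once the status tuple is decoded. In spdk_nvme_print_completion's output the "(03/02)" pair is the NVMe Status Code Type and Status Code: SCT 0x3 is Path Related Status, and SC 0x02 under that type is Asymmetric Access Inaccessible, meaning the namespace's ANA state forbids I/O through this controller path; dnr:0 marks each failure as retryable. Below is a minimal standalone C sketch of that decoding -- illustrative only, not SPDK code, and the struct and helper names are invented for this example:

	/*
	 * Sketch: decode the "(SCT/SC)" pair that spdk_nvme_print_completion
	 * prints, e.g. the "(03/02)" in the log above. Values follow the NVMe
	 * base specification's status field layout.
	 */
	#include <stdio.h>

	struct nvme_status { unsigned sct, sc, dnr; };   /* hypothetical holder */

	static const char *sct_name(unsigned sct)
	{
		switch (sct) {
		case 0x0: return "Generic Command Status";
		case 0x1: return "Command Specific Status";
		case 0x2: return "Media and Data Integrity Errors";
		case 0x3: return "Path Related Status";
		default:  return "Reserved/Vendor Specific";
		}
	}

	int main(void)
	{
		/* Values taken from the log lines above: (03/02), dnr:0. */
		struct nvme_status st = { .sct = 0x3, .sc = 0x02, .dnr = 0 };

		printf("SCT 0x%x (%s), SC 0x%02x\n", st.sct, sct_name(st.sct), st.sc);
		if (st.sct == 0x3 && st.sc == 0x02)
			printf("ANA Inaccessible: I/O to this namespace fails on "
			       "this controller path until the ANA state changes.\n");
		printf("dnr=%u -> %s\n", st.dnr,
		       st.dnr ? "do not retry" : "host may retry, e.g. on another path");
		return 0;
	}

Because the status is path-related and dnr is clear, a multipath host would ordinarily retry these commands on another ANA-optimized path; the burst here appears to be the initiator's outstanding queue draining with that status while the test holds this path in the ANA Inaccessible state.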
00:27:11.855-00:27:11.857 [2024-11-15 11:50:22.051257-22.052853] nvme_qpair.c: [the same *NOTICE* command/completion flood continues: WRITEs at lba:13264-13536 and READs at lba:12520-12776 on qid:1, every completion again ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0, sqhd:0026-006a; repetitive entries condensed.]
00:27:11.857 [2024-11-15 11:50:22.052865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123
nsid:1 lba:12784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.857 [2024-11-15 11:50:22.052871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:11.857 [2024-11-15 11:50:22.052881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.857 [2024-11-15 11:50:22.052886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:11.857 [2024-11-15 11:50:22.052896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.857 [2024-11-15 11:50:22.052902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:11.857 [2024-11-15 11:50:22.052912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.857 [2024-11-15 11:50:22.052917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:11.857 [2024-11-15 11:50:22.052927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.857 [2024-11-15 11:50:22.052932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:11.857 [2024-11-15 11:50:22.052943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:12824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.857 [2024-11-15 11:50:22.052948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:11.857 [2024-11-15 11:50:22.052958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.857 [2024-11-15 11:50:22.052963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:11.857 [2024-11-15 11:50:22.052974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.857 [2024-11-15 11:50:22.052979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:11.857 [2024-11-15 11:50:22.052989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.857 [2024-11-15 11:50:22.052994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:11.857 [2024-11-15 11:50:22.053005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.857 [2024-11-15 11:50:22.053010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:11.857 [2024-11-15 11:50:22.053020] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.857 [2024-11-15 11:50:22.053025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:11.857 [2024-11-15 11:50:22.053036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.857 [2024-11-15 11:50:22.053041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:11.857 [2024-11-15 11:50:22.053051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.857 [2024-11-15 11:50:22.053058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:11.857 [2024-11-15 11:50:22.053068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.857 [2024-11-15 11:50:22.053073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:11.857 [2024-11-15 11:50:22.053083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.857 [2024-11-15 11:50:22.053088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:11.857 [2024-11-15 11:50:22.053098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.857 [2024-11-15 11:50:22.053104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:11.857 [2024-11-15 11:50:22.053114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.857 [2024-11-15 11:50:22.053119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:11.857 [2024-11-15 11:50:22.053129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.857 [2024-11-15 11:50:22.053134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:11.857 [2024-11-15 11:50:22.053145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.857 [2024-11-15 11:50:22.053150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:11.857 [2024-11-15 11:50:22.053160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.857 [2024-11-15 11:50:22.053165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 
dnr:0 00:27:11.857 [2024-11-15 11:50:22.053175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.857 [2024-11-15 11:50:22.053181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:11.857 [2024-11-15 11:50:22.053191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.857 [2024-11-15 11:50:22.053196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.857 [2024-11-15 11:50:22.053206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.857 [2024-11-15 11:50:22.053211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.857 [2024-11-15 11:50:22.053222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.857 [2024-11-15 11:50:22.053227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.857 [2024-11-15 11:50:22.053237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.857 [2024-11-15 11:50:22.053244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:11.857 [2024-11-15 11:50:22.053254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.857 [2024-11-15 11:50:22.053260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:11.857 [2024-11-15 11:50:22.053270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.857 [2024-11-15 11:50:22.053275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:11.857 [2024-11-15 11:50:22.053286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.857 [2024-11-15 11:50:22.053291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:11.857 [2024-11-15 11:50:22.053301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.857 [2024-11-15 11:50:22.053306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:11.857 [2024-11-15 11:50:22.053767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.857 [2024-11-15 11:50:22.053776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:11.857 [2024-11-15 11:50:22.053788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.857 [2024-11-15 11:50:22.053793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:11.857 [2024-11-15 11:50:22.053803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:13032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.857 [2024-11-15 11:50:22.053809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:11.857 [2024-11-15 11:50:22.053819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.857 [2024-11-15 11:50:22.053824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:11.857 [2024-11-15 11:50:22.053834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.858 [2024-11-15 11:50:22.053839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:11.858 [2024-11-15 11:50:22.053850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.858 [2024-11-15 11:50:22.053855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:11.858 [2024-11-15 11:50:22.053865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.858 [2024-11-15 11:50:22.053870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:11.858 [2024-11-15 11:50:22.053880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.858 [2024-11-15 11:50:22.053885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:11.858 [2024-11-15 11:50:22.053898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.858 [2024-11-15 11:50:22.053904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:11.858 [2024-11-15 11:50:22.053914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.858 [2024-11-15 11:50:22.053919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:11.858 [2024-11-15 11:50:22.053929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.858 [2024-11-15 11:50:22.053934] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:11.858 [2024-11-15 11:50:22.053944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.858 [2024-11-15 11:50:22.053950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:11.858 [2024-11-15 11:50:22.053960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.858 [2024-11-15 11:50:22.053965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:11.858 [2024-11-15 11:50:22.053975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.858 [2024-11-15 11:50:22.053981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:11.858 [2024-11-15 11:50:22.053990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.858 [2024-11-15 11:50:22.053996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:11.858 [2024-11-15 11:50:22.054006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:13136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.858 [2024-11-15 11:50:22.054011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:11.858 [2024-11-15 11:50:22.054021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.858 [2024-11-15 11:50:22.054026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:11.858 [2024-11-15 11:50:22.054036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.858 [2024-11-15 11:50:22.054041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:11.858 [2024-11-15 11:50:22.054051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.858 [2024-11-15 11:50:22.054057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:11.858 [2024-11-15 11:50:22.054067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.858 [2024-11-15 11:50:22.054072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:11.858 [2024-11-15 11:50:22.054084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:11.858 [2024-11-15 11:50:22.054089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:11.858 [2024-11-15 11:50:22.054099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.858 [2024-11-15 11:50:22.054104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:11.858 [2024-11-15 11:50:22.054114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.858 [2024-11-15 11:50:22.054119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:11.858 [2024-11-15 11:50:22.054130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.858 [2024-11-15 11:50:22.054134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:11.858 [2024-11-15 11:50:22.054145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.858 [2024-11-15 11:50:22.054150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:11.858 [2024-11-15 11:50:22.054160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.858 [2024-11-15 11:50:22.054165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.858 [2024-11-15 11:50:22.054175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.858 [2024-11-15 11:50:22.054181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.858 [2024-11-15 11:50:22.054192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:13232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.858 [2024-11-15 11:50:22.054198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:11.858 [2024-11-15 11:50:22.054208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.858 [2024-11-15 11:50:22.054213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:11.858 [2024-11-15 11:50:22.054223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.858 [2024-11-15 11:50:22.054228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:11.858 [2024-11-15 11:50:22.054239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 
lba:13256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.858 [2024-11-15 11:50:22.054244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:11.858 [2024-11-15 11:50:22.054254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.858 [2024-11-15 11:50:22.054259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:11.858 [2024-11-15 11:50:22.054269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.858 [2024-11-15 11:50:22.054276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:11.858 [2024-11-15 11:50:22.054286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.858 [2024-11-15 11:50:22.054291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:11.858 [2024-11-15 11:50:22.054301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.858 [2024-11-15 11:50:22.054307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:11.858 [2024-11-15 11:50:22.054316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.858 [2024-11-15 11:50:22.054321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:11.858 [2024-11-15 11:50:22.054332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.858 [2024-11-15 11:50:22.054337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:11.858 [2024-11-15 11:50:22.054347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.858 [2024-11-15 11:50:22.054352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:11.858 [2024-11-15 11:50:22.054362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:13320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.858 [2024-11-15 11:50:22.054367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:11.858 [2024-11-15 11:50:22.054377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.858 [2024-11-15 11:50:22.054382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:11.858 [2024-11-15 11:50:22.054393] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.858 [2024-11-15 11:50:22.054398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:11.858 [2024-11-15 11:50:22.054408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.858 [2024-11-15 11:50:22.054413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:11.858 [2024-11-15 11:50:22.054424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.858 [2024-11-15 11:50:22.054429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:11.858 [2024-11-15 11:50:22.054439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.859 [2024-11-15 11:50:22.054444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:11.859 [2024-11-15 11:50:22.054454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.859 [2024-11-15 11:50:22.054460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:11.859 [2024-11-15 11:50:22.054470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.859 [2024-11-15 11:50:22.054476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:11.859 [2024-11-15 11:50:22.054486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.859 [2024-11-15 11:50:22.054491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:11.859 [2024-11-15 11:50:22.054501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.859 [2024-11-15 11:50:22.054506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:11.859 [2024-11-15 11:50:22.054516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.859 [2024-11-15 11:50:22.054521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:11.859 [2024-11-15 11:50:22.054531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.859 [2024-11-15 11:50:22.054537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 
00:27:11.859 [2024-11-15 11:50:22.054547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.859 [2024-11-15 11:50:22.054552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:11.859 [2024-11-15 11:50:22.054565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.859 [2024-11-15 11:50:22.054570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:11.859 [2024-11-15 11:50:22.054580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.859 [2024-11-15 11:50:22.054586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:11.859 [2024-11-15 11:50:22.054596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.859 [2024-11-15 11:50:22.054601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:11.859 [2024-11-15 11:50:22.054611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.859 [2024-11-15 11:50:22.054616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:11.859 [2024-11-15 11:50:22.054627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.859 [2024-11-15 11:50:22.054632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:11.859 [2024-11-15 11:50:22.054642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.859 [2024-11-15 11:50:22.054647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:11.859 [2024-11-15 11:50:22.055102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.859 [2024-11-15 11:50:22.055117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.859 [2024-11-15 11:50:22.055131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.859 [2024-11-15 11:50:22.055137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.859 [2024-11-15 11:50:22.055147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.859 [2024-11-15 11:50:22.055152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:34 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:11.859 [2024-11-15 11:50:22.055162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.859 [2024-11-15 11:50:22.055167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:11.859 [2024-11-15 11:50:22.055178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.859 [2024-11-15 11:50:22.055183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:11.859 [2024-11-15 11:50:22.055193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.859 [2024-11-15 11:50:22.055198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:11.859 [2024-11-15 11:50:22.055208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:13520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.859 [2024-11-15 11:50:22.055214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:11.859 [2024-11-15 11:50:22.055224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.859 [2024-11-15 11:50:22.055229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:11.859 [2024-11-15 11:50:22.055239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.859 [2024-11-15 11:50:22.055244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:11.859 [2024-11-15 11:50:22.055255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.859 [2024-11-15 11:50:22.055260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:11.859 [2024-11-15 11:50:22.055270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.859 [2024-11-15 11:50:22.055275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:11.859 [2024-11-15 11:50:22.055285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.859 [2024-11-15 11:50:22.055290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:11.859 [2024-11-15 11:50:22.055304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.859 [2024-11-15 11:50:22.055310] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:11.859 [2024-11-15 11:50:22.055320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.859 [2024-11-15 11:50:22.055325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:11.859 [2024-11-15 11:50:22.055335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.859 [2024-11-15 11:50:22.055340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:11.859 [2024-11-15 11:50:22.055351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.859 [2024-11-15 11:50:22.055356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:11.859 [2024-11-15 11:50:22.055366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.859 [2024-11-15 11:50:22.055371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:11.859 [2024-11-15 11:50:22.055381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.859 [2024-11-15 11:50:22.055387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:11.859 [2024-11-15 11:50:22.055397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.859 [2024-11-15 11:50:22.055402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:11.859 [2024-11-15 11:50:22.055412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.859 [2024-11-15 11:50:22.055417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:11.859 [2024-11-15 11:50:22.055428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.859 [2024-11-15 11:50:22.055433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:11.859 [2024-11-15 11:50:22.055444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.859 [2024-11-15 11:50:22.055450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:11.859 [2024-11-15 11:50:22.055461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:11.859 [2024-11-15 11:50:22.055466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:11.859 [2024-11-15 11:50:22.055476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.859 [2024-11-15 11:50:22.055481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:11.859 [2024-11-15 11:50:22.055492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.859 [2024-11-15 11:50:22.055498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:11.860 [2024-11-15 11:50:22.055508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.860 [2024-11-15 11:50:22.055514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:11.860 [2024-11-15 11:50:22.055524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.860 [2024-11-15 11:50:22.055529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:11.860 [2024-11-15 11:50:22.055539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.860 [2024-11-15 11:50:22.055544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:11.860 [2024-11-15 11:50:22.055554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.860 [2024-11-15 11:50:22.055559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:11.860 [2024-11-15 11:50:22.055574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.860 [2024-11-15 11:50:22.055580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:11.860 [2024-11-15 11:50:22.055590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.860 [2024-11-15 11:50:22.055596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:11.860 [2024-11-15 11:50:22.055606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.860 [2024-11-15 11:50:22.055611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:11.860 [2024-11-15 11:50:22.055621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 
nsid:1 lba:12704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.860 [2024-11-15 11:50:22.055626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.860 [2024-11-15 11:50:22.055636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.860 [2024-11-15 11:50:22.055642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.860 [2024-11-15 11:50:22.055652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.860 [2024-11-15 11:50:22.055657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:11.860 [2024-11-15 11:50:22.055667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.860 [2024-11-15 11:50:22.055672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:11.860 [2024-11-15 11:50:22.055683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.860 [2024-11-15 11:50:22.055689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:11.860 [2024-11-15 11:50:22.055699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.860 [2024-11-15 11:50:22.055705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:11.860 [2024-11-15 11:50:22.055715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.860 [2024-11-15 11:50:22.055720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:11.860 [2024-11-15 11:50:22.055730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.860 [2024-11-15 11:50:22.055735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:11.860 [2024-11-15 11:50:22.055745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:12768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.860 [2024-11-15 11:50:22.055750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:11.860 [2024-11-15 11:50:22.055760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.860 [2024-11-15 11:50:22.055766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:11.860 [2024-11-15 11:50:22.055776] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.860 [2024-11-15 11:50:22.055781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:11.860 [2024-11-15 11:50:22.055791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.860 [2024-11-15 11:50:22.055796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:11.860 [2024-11-15 11:50:22.055806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.860 [2024-11-15 11:50:22.055811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:11.860 [2024-11-15 11:50:22.055822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.860 [2024-11-15 11:50:22.055827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:11.860 [2024-11-15 11:50:22.055837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.860 [2024-11-15 11:50:22.055842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:11.860 [2024-11-15 11:50:22.055853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.860 [2024-11-15 11:50:22.055858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:11.860 [2024-11-15 11:50:22.055868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.860 [2024-11-15 11:50:22.055874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:11.860 [2024-11-15 11:50:22.055884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.860 [2024-11-15 11:50:22.055890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:11.860 [2024-11-15 11:50:22.055900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.860 [2024-11-15 11:50:22.055906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:11.860 [2024-11-15 11:50:22.056258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.860 [2024-11-15 11:50:22.056268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0074 p:0 m:0 
dnr:0 00:27:11.860 [2024-11-15 11:50:22.056280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.860 [2024-11-15 11:50:22.056287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:11.860 [2024-11-15 11:50:22.056299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.860 [2024-11-15 11:50:22.056304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:11.860 [2024-11-15 11:50:22.056315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.860 [2024-11-15 11:50:22.056320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:11.860 [2024-11-15 11:50:22.056331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.860 [2024-11-15 11:50:22.056338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:11.860 [2024-11-15 11:50:22.056348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.860 [2024-11-15 11:50:22.056354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:11.860 [2024-11-15 11:50:22.056364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.860 [2024-11-15 11:50:22.056369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:11.860 [2024-11-15 11:50:22.056380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.860 [2024-11-15 11:50:22.056385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:11.860 [2024-11-15 11:50:22.056395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.860 [2024-11-15 11:50:22.056400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:11.860 [2024-11-15 11:50:22.056411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.860 [2024-11-15 11:50:22.056416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:11.860 [2024-11-15 11:50:22.056428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.860 [2024-11-15 11:50:22.056433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:27:11.860 [2024-11-15 11:50:22.056444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.860 [2024-11-15 11:50:22.056449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:27:11.860 [2024-11-15 11:50:22.056459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.861 [2024-11-15 11:50:22.056464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[several hundred similar command/completion prints condensed: READ and WRITE submissions on sqid:1 nsid:1, lba 12520-13536, len:8, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), timestamps 11:50:22.056-11:50:22.065]
00:27:11.865 [2024-11-15 11:50:22.065056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.865 [2024-11-15 11:50:22.065061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:27:11.865 [2024-11-15 11:50:22.065072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:13408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.865 [2024-11-15 11:50:22.065077] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:11.866 [2024-11-15 11:50:22.065087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.866 [2024-11-15 11:50:22.065093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:11.866 [2024-11-15 11:50:22.065103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.866 [2024-11-15 11:50:22.065108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:11.866 [2024-11-15 11:50:22.065118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.866 [2024-11-15 11:50:22.065123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:11.866 [2024-11-15 11:50:22.065133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.866 [2024-11-15 11:50:22.065138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:11.866 [2024-11-15 11:50:22.065148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.866 [2024-11-15 11:50:22.065153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:11.866 [2024-11-15 11:50:22.065164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.866 [2024-11-15 11:50:22.065169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:11.866 [2024-11-15 11:50:22.065179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:13464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.866 [2024-11-15 11:50:22.065184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:11.866 [2024-11-15 11:50:22.065194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.866 [2024-11-15 11:50:22.065199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.866 [2024-11-15 11:50:22.065210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.866 [2024-11-15 11:50:22.065215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.866 [2024-11-15 11:50:22.065225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.866 [2024-11-15 
11:50:22.065230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:11.866 [2024-11-15 11:50:22.065240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.866 [2024-11-15 11:50:22.065246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:11.866 [2024-11-15 11:50:22.065256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.866 [2024-11-15 11:50:22.065261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:11.866 [2024-11-15 11:50:22.065271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.866 [2024-11-15 11:50:22.065276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:11.866 [2024-11-15 11:50:22.065286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.866 [2024-11-15 11:50:22.065292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:11.866 [2024-11-15 11:50:22.065302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.866 [2024-11-15 11:50:22.065307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:11.866 [2024-11-15 11:50:22.065317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.866 [2024-11-15 11:50:22.065322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:11.866 [2024-11-15 11:50:22.065332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.866 [2024-11-15 11:50:22.065337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:11.866 [2024-11-15 11:50:22.065348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.866 [2024-11-15 11:50:22.065353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:11.866 [2024-11-15 11:50:22.065363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.866 [2024-11-15 11:50:22.065368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:11.866 [2024-11-15 11:50:22.065378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12552 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.866 [2024-11-15 11:50:22.065384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:11.866 [2024-11-15 11:50:22.065395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.866 [2024-11-15 11:50:22.065401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:11.866 [2024-11-15 11:50:22.065411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.866 [2024-11-15 11:50:22.065416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:11.866 [2024-11-15 11:50:22.065426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.866 [2024-11-15 11:50:22.065431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:11.866 [2024-11-15 11:50:22.065441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.866 [2024-11-15 11:50:22.065447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:11.866 [2024-11-15 11:50:22.065457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.866 [2024-11-15 11:50:22.065462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:11.866 [2024-11-15 11:50:22.065472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.866 [2024-11-15 11:50:22.065477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:11.866 [2024-11-15 11:50:22.065488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.866 [2024-11-15 11:50:22.065493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:11.866 [2024-11-15 11:50:22.065503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.866 [2024-11-15 11:50:22.065508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:11.866 [2024-11-15 11:50:22.065518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.866 [2024-11-15 11:50:22.065524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:11.866 [2024-11-15 11:50:22.065534] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.866 [2024-11-15 11:50:22.065539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:11.866 [2024-11-15 11:50:22.065549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.866 [2024-11-15 11:50:22.065554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:11.866 [2024-11-15 11:50:22.065568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:12640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.866 [2024-11-15 11:50:22.065574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:11.866 [2024-11-15 11:50:22.065584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.866 [2024-11-15 11:50:22.065591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:11.866 [2024-11-15 11:50:22.065601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.866 [2024-11-15 11:50:22.065606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:11.866 [2024-11-15 11:50:22.065616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.866 [2024-11-15 11:50:22.065621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:11.866 [2024-11-15 11:50:22.065631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.866 [2024-11-15 11:50:22.065637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:11.866 [2024-11-15 11:50:22.065647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.866 [2024-11-15 11:50:22.065652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:11.866 [2024-11-15 11:50:22.065662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.866 [2024-11-15 11:50:22.065667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:11.866 [2024-11-15 11:50:22.065677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.866 [2024-11-15 11:50:22.065682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0060 p:0 m:0 
dnr:0 00:27:11.867 [2024-11-15 11:50:22.065692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.867 [2024-11-15 11:50:22.065698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.867 [2024-11-15 11:50:22.065708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.867 [2024-11-15 11:50:22.065713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.867 [2024-11-15 11:50:22.065723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.867 [2024-11-15 11:50:22.065728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:11.867 [2024-11-15 11:50:22.065739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.867 [2024-11-15 11:50:22.065744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:11.867 [2024-11-15 11:50:22.065754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.867 [2024-11-15 11:50:22.065759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:11.867 [2024-11-15 11:50:22.065769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.867 [2024-11-15 11:50:22.065775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:11.867 [2024-11-15 11:50:22.065786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.867 [2024-11-15 11:50:22.065791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:11.867 [2024-11-15 11:50:22.065801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.867 [2024-11-15 11:50:22.065806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:11.867 [2024-11-15 11:50:22.065816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.867 [2024-11-15 11:50:22.065821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:11.867 [2024-11-15 11:50:22.065831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.867 [2024-11-15 11:50:22.065837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:11.867 [2024-11-15 11:50:22.065847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.867 [2024-11-15 11:50:22.065852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:11.867 [2024-11-15 11:50:22.065862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.867 [2024-11-15 11:50:22.065867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:11.867 [2024-11-15 11:50:22.065877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.867 [2024-11-15 11:50:22.065882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:11.867 [2024-11-15 11:50:22.065892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.867 [2024-11-15 11:50:22.065898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:11.867 [2024-11-15 11:50:22.065908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.867 [2024-11-15 11:50:22.065913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:11.867 [2024-11-15 11:50:22.065923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.867 [2024-11-15 11:50:22.065928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:11.867 [2024-11-15 11:50:22.065938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.867 [2024-11-15 11:50:22.065943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:11.867 [2024-11-15 11:50:22.066414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.867 [2024-11-15 11:50:22.066427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:11.867 [2024-11-15 11:50:22.066439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.867 [2024-11-15 11:50:22.066444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:11.867 [2024-11-15 11:50:22.066455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.867 [2024-11-15 11:50:22.066460] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:11.867 [2024-11-15 11:50:22.066470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.867 [2024-11-15 11:50:22.066476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:11.867 [2024-11-15 11:50:22.066486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.867 [2024-11-15 11:50:22.066491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:11.867 [2024-11-15 11:50:22.066501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.867 [2024-11-15 11:50:22.066506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:11.867 [2024-11-15 11:50:22.066516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.867 [2024-11-15 11:50:22.066522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:11.867 [2024-11-15 11:50:22.066532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.867 [2024-11-15 11:50:22.066537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:11.867 [2024-11-15 11:50:22.066547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.867 [2024-11-15 11:50:22.066552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:11.867 [2024-11-15 11:50:22.066567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.867 [2024-11-15 11:50:22.066575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:11.867 [2024-11-15 11:50:22.066585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.867 [2024-11-15 11:50:22.066590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:11.867 [2024-11-15 11:50:22.066600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.867 [2024-11-15 11:50:22.066606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:11.867 [2024-11-15 11:50:22.066616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:11.867 [2024-11-15 11:50:22.066621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:11.867 [2024-11-15 11:50:22.066632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.867 [2024-11-15 11:50:22.066638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:11.867 [2024-11-15 11:50:22.066648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.867 [2024-11-15 11:50:22.066653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.867 [2024-11-15 11:50:22.066663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.867 [2024-11-15 11:50:22.066668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.867 [2024-11-15 11:50:22.066678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.867 [2024-11-15 11:50:22.066684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.867 [2024-11-15 11:50:22.066694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.867 [2024-11-15 11:50:22.066699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:11.867 [2024-11-15 11:50:22.066709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.867 [2024-11-15 11:50:22.066714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:11.867 [2024-11-15 11:50:22.066724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.867 [2024-11-15 11:50:22.066730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:11.867 [2024-11-15 11:50:22.066740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.867 [2024-11-15 11:50:22.066745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:11.867 [2024-11-15 11:50:22.066755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.867 [2024-11-15 11:50:22.066760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:11.868 [2024-11-15 11:50:22.066770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 
nsid:1 lba:13016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.868 [2024-11-15 11:50:22.066776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:11.868 [2024-11-15 11:50:22.066786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.868 [2024-11-15 11:50:22.066791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:11.868 [2024-11-15 11:50:22.066801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.868 [2024-11-15 11:50:22.066806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:11.868 [2024-11-15 11:50:22.066817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.868 [2024-11-15 11:50:22.066822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:11.868 [2024-11-15 11:50:22.071074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.868 [2024-11-15 11:50:22.071096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:11.868 [2024-11-15 11:50:22.071108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.868 [2024-11-15 11:50:22.071114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:11.868 [2024-11-15 11:50:22.071125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.868 [2024-11-15 11:50:22.071131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:11.868 [2024-11-15 11:50:22.071142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.868 [2024-11-15 11:50:22.071147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:11.868 [2024-11-15 11:50:22.071158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.868 [2024-11-15 11:50:22.071164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:11.868 [2024-11-15 11:50:22.071175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.868 [2024-11-15 11:50:22.071180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:11.868 [2024-11-15 11:50:22.071191] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.868 [2024-11-15 11:50:22.071197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:11.868 [2024-11-15 11:50:22.071208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.868 [2024-11-15 11:50:22.071213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:11.868 [2024-11-15 11:50:22.071224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.868 [2024-11-15 11:50:22.071230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:11.868 [2024-11-15 11:50:22.071241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.868 [2024-11-15 11:50:22.071246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:11.868 [2024-11-15 11:50:22.071257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.868 [2024-11-15 11:50:22.071263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:11.868 [2024-11-15 11:50:22.071274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:13136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.868 [2024-11-15 11:50:22.071283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:11.868 [2024-11-15 11:50:22.071294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.868 [2024-11-15 11:50:22.071299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:11.868 [2024-11-15 11:50:22.071310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.868 [2024-11-15 11:50:22.071315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:11.868 [2024-11-15 11:50:22.071326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.868 [2024-11-15 11:50:22.071332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:11.868 [2024-11-15 11:50:22.071343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.868 [2024-11-15 11:50:22.071348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001b p:0 m:0 dnr:0 
00:27:11.868 [2024-11-15 11:50:22.071359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.868 [2024-11-15 11:50:22.071364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:11.868 [2024-11-15 11:50:22.071375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.868 [2024-11-15 11:50:22.071381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:11.868 [2024-11-15 11:50:22.071391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.868 [2024-11-15 11:50:22.071397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:11.868 [2024-11-15 11:50:22.071408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.868 [2024-11-15 11:50:22.071413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:11.868 [2024-11-15 11:50:22.071424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.868 [2024-11-15 11:50:22.071429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:11.868 [2024-11-15 11:50:22.071440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.868 [2024-11-15 11:50:22.071446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.868 [2024-11-15 11:50:22.071457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:13224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.868 [2024-11-15 11:50:22.071462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.868 [2024-11-15 11:50:22.071473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.868 [2024-11-15 11:50:22.071480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:11.868 [2024-11-15 11:50:22.071491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.868 [2024-11-15 11:50:22.071496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:11.868 [2024-11-15 11:50:22.071507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.868 [2024-11-15 11:50:22.071512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:114 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:11.868 [2024-11-15 11:50:22.071523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.868 [2024-11-15 11:50:22.071529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:11.868 [2024-11-15 11:50:22.071539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.868 [2024-11-15 11:50:22.071545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:11.868 [2024-11-15 11:50:22.071556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.868 [2024-11-15 11:50:22.071568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:11.868 [2024-11-15 11:50:22.071580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.868 [2024-11-15 11:50:22.071585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:11.868 [2024-11-15 11:50:22.071596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.868 [2024-11-15 11:50:22.071601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:11.868 [2024-11-15 11:50:22.071612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.868 [2024-11-15 11:50:22.071618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:11.868 [2024-11-15 11:50:22.071629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.868 [2024-11-15 11:50:22.071634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:11.868 [2024-11-15 11:50:22.071645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.868 [2024-11-15 11:50:22.071650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:11.868 [2024-11-15 11:50:22.071661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:13320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.868 [2024-11-15 11:50:22.071666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:11.869 [2024-11-15 11:50:22.071677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.869 [2024-11-15 11:50:22.071683] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:11.869 [2024-11-15 11:50:22.072192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.869 [2024-11-15 11:50:22.072204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:11.869 [2024-11-15 11:50:22.072218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.869 [2024-11-15 11:50:22.072224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:11.869 [2024-11-15 11:50:22.072239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.869 [2024-11-15 11:50:22.072249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:11.869 [2024-11-15 11:50:22.072262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.869 [2024-11-15 11:50:22.072268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:11.869 [2024-11-15 11:50:22.072279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.869 [2024-11-15 11:50:22.072285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:11.869 [2024-11-15 11:50:22.072295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.869 [2024-11-15 11:50:22.072301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:11.869 [2024-11-15 11:50:22.072312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.869 [2024-11-15 11:50:22.072317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:11.869 [2024-11-15 11:50:22.072328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.869 [2024-11-15 11:50:22.072333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:11.869 [2024-11-15 11:50:22.072344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.869 [2024-11-15 11:50:22.072349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:11.869 [2024-11-15 11:50:22.072360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:11.869 [2024-11-15 11:50:22.072365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:11.869 [2024-11-15 11:50:22.072376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.869 [2024-11-15 11:50:22.072381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:11.869 [2024-11-15 11:50:22.072392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.869 [2024-11-15 11:50:22.072398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:11.869 [2024-11-15 11:50:22.072412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.869 [2024-11-15 11:50:22.072418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:11.869 [2024-11-15 11:50:22.072429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.869 [2024-11-15 11:50:22.072434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:11.869 [2024-11-15 11:50:22.072445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.869 [2024-11-15 11:50:22.072450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:11.869 [2024-11-15 11:50:22.072461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.869 [2024-11-15 11:50:22.072466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:11.869 [2024-11-15 11:50:22.072477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.869 [2024-11-15 11:50:22.072483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:11.869 [2024-11-15 11:50:22.072493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.869 [2024-11-15 11:50:22.072499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.869 [2024-11-15 11:50:22.072509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.869 [2024-11-15 11:50:22.072515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.869 [2024-11-15 11:50:22.072526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 
lba:13488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.869
[log trimmed: several hundred near-identical NOTICE pairs omitted. Between 00:27:11.869 and 00:27:11.875 ([2024-11-15 11:50:22.072531] through [2024-11-15 11:50:22.081686]), nvme_qpair.c: 243:nvme_io_qpair_print_command logs every outstanding I/O on qid:1 (READ lba 12520-13032 via SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; WRITE lba 13040-13536 via SGL DATA BLOCK OFFSET 0x0 len:0x1000; all len:8, nsid:1), and for each command nvme_qpair.c: 474:spdk_nvme_print_completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0.]
cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:11.875 [2024-11-15 11:50:22.081698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.875 [2024-11-15 11:50:22.081705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:11.875 [2024-11-15 11:50:22.081716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.875 [2024-11-15 11:50:22.081722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:11.875 [2024-11-15 11:50:22.081733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.875 [2024-11-15 11:50:22.081741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:11.875 [2024-11-15 11:50:22.081752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:13120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.875 [2024-11-15 11:50:22.081758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:11.875 [2024-11-15 11:50:22.081770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.875 [2024-11-15 11:50:22.081776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:11.875 [2024-11-15 11:50:22.081787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.875 [2024-11-15 11:50:22.081793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:11.875 [2024-11-15 11:50:22.081804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.875 [2024-11-15 11:50:22.081814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:11.875 [2024-11-15 11:50:22.081826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.875 [2024-11-15 11:50:22.081832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:11.875 [2024-11-15 11:50:22.081844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.875 [2024-11-15 11:50:22.081849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:11.875 [2024-11-15 11:50:22.081861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.875 [2024-11-15 11:50:22.081867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:11.875 [2024-11-15 11:50:22.081878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.875 [2024-11-15 11:50:22.081884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:11.875 [2024-11-15 11:50:22.081896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.875 [2024-11-15 11:50:22.081901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:11.875 [2024-11-15 11:50:22.081913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.875 [2024-11-15 11:50:22.081918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:11.875 [2024-11-15 11:50:22.081930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.875 [2024-11-15 11:50:22.081936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:11.875 [2024-11-15 11:50:22.081948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:13208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.875 [2024-11-15 11:50:22.081954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:11.875 [2024-11-15 11:50:22.081966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.875 [2024-11-15 11:50:22.081972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.875 [2024-11-15 11:50:22.081984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.875 [2024-11-15 11:50:22.081991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.875 [2024-11-15 11:50:22.082002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.875 [2024-11-15 11:50:22.082008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:11.875 [2024-11-15 11:50:22.082020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.875 [2024-11-15 11:50:22.082028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:11.875 [2024-11-15 11:50:22.082040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.875 [2024-11-15 11:50:22.082046] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:11.875 [2024-11-15 11:50:22.082057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.875 [2024-11-15 11:50:22.082063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:11.875 [2024-11-15 11:50:22.082074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.875 [2024-11-15 11:50:22.082080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:11.875 [2024-11-15 11:50:22.082092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.875 [2024-11-15 11:50:22.082098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:11.875 [2024-11-15 11:50:22.082109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.875 [2024-11-15 11:50:22.082116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:11.875 [2024-11-15 11:50:22.082127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.875 [2024-11-15 11:50:22.082133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:11.875 [2024-11-15 11:50:22.082145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.875 [2024-11-15 11:50:22.082150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:11.875 [2024-11-15 11:50:22.082162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:13304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.875 [2024-11-15 11:50:22.082168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:11.875 [2024-11-15 11:50:22.082180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.875 [2024-11-15 11:50:22.082186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:11.875 [2024-11-15 11:50:22.082729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.875 [2024-11-15 11:50:22.082743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:11.875 [2024-11-15 11:50:22.082757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13328 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:11.875 [2024-11-15 11:50:22.082764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:11.875 [2024-11-15 11:50:22.082781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.876 [2024-11-15 11:50:22.082788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:11.876 [2024-11-15 11:50:22.082803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.876 [2024-11-15 11:50:22.082808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:11.876 [2024-11-15 11:50:22.082821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.876 [2024-11-15 11:50:22.082827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:11.876 [2024-11-15 11:50:22.082838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.876 [2024-11-15 11:50:22.082844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:11.876 [2024-11-15 11:50:22.082855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.876 [2024-11-15 11:50:22.082862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:11.876 [2024-11-15 11:50:22.082873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.876 [2024-11-15 11:50:22.082879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:11.876 [2024-11-15 11:50:22.082890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.876 [2024-11-15 11:50:22.082896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:11.876 [2024-11-15 11:50:22.082907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.876 [2024-11-15 11:50:22.082913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:11.876 [2024-11-15 11:50:22.082924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.876 [2024-11-15 11:50:22.082931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:11.876 [2024-11-15 11:50:22.082943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:93 nsid:1 lba:13408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.876 [2024-11-15 11:50:22.082949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:11.876 [2024-11-15 11:50:22.082960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.876 [2024-11-15 11:50:22.082966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:11.876 [2024-11-15 11:50:22.082978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.876 [2024-11-15 11:50:22.082984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:11.876 [2024-11-15 11:50:22.082995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.876 [2024-11-15 11:50:22.083001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:11.876 [2024-11-15 11:50:22.083014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.876 [2024-11-15 11:50:22.083020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:11.876 [2024-11-15 11:50:22.083031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.876 [2024-11-15 11:50:22.083037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:11.876 [2024-11-15 11:50:22.083048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.876 [2024-11-15 11:50:22.083054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:11.876 [2024-11-15 11:50:22.083065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.876 [2024-11-15 11:50:22.083071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:11.876 [2024-11-15 11:50:22.083083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.876 [2024-11-15 11:50:22.083088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:11.876 [2024-11-15 11:50:22.083100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.876 [2024-11-15 11:50:22.083105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.876 [2024-11-15 11:50:22.083117] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.876 [2024-11-15 11:50:22.083123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:11.876 [2024-11-15 11:50:22.083134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.876 [2024-11-15 11:50:22.083140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:11.876 [2024-11-15 11:50:22.083151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:13504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.876 [2024-11-15 11:50:22.083157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:11.876 [2024-11-15 11:50:22.083168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.876 [2024-11-15 11:50:22.083174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:11.876 [2024-11-15 11:50:22.083185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.876 [2024-11-15 11:50:22.083191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:11.876 [2024-11-15 11:50:22.083203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.876 [2024-11-15 11:50:22.083209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:11.876 [2024-11-15 11:50:22.083222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-11-15 11:50:22.083229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:11.876 [2024-11-15 11:50:22.083241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-11-15 11:50:22.083247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:11.876 [2024-11-15 11:50:22.083259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-11-15 11:50:22.083265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:11.876 [2024-11-15 11:50:22.083276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-11-15 11:50:22.083282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004c p:0 m:0 dnr:0 
00:27:11.876 [2024-11-15 11:50:22.083293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-11-15 11:50:22.083299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:11.876 [2024-11-15 11:50:22.083311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-11-15 11:50:22.083317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:11.876 [2024-11-15 11:50:22.083328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-11-15 11:50:22.083334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:11.876 [2024-11-15 11:50:22.083346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-11-15 11:50:22.083351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:11.876 [2024-11-15 11:50:22.083363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-11-15 11:50:22.083369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:11.876 [2024-11-15 11:50:22.083380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.876 [2024-11-15 11:50:22.083386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:11.876 [2024-11-15 11:50:22.083398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-11-15 11:50:22.083404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:11.877 [2024-11-15 11:50:22.083415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-11-15 11:50:22.083421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:11.877 [2024-11-15 11:50:22.083433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-11-15 11:50:22.083439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:11.877 [2024-11-15 11:50:22.083451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-11-15 11:50:22.083457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:11.877 [2024-11-15 11:50:22.083468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:12632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-11-15 11:50:22.083474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:11.877 [2024-11-15 11:50:22.083485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.877 [2024-11-15 11:50:22.083491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:11.877 [2024-11-15 11:50:22.083503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-11-15 11:50:22.083508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:11.877 [2024-11-15 11:50:22.083520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-11-15 11:50:22.083526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:11.877 [2024-11-15 11:50:22.083537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-11-15 11:50:22.083543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:11.877 [2024-11-15 11:50:22.083554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-11-15 11:50:22.083560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:11.877 [2024-11-15 11:50:22.083576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-11-15 11:50:22.083582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:11.877 [2024-11-15 11:50:22.083593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:12680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-11-15 11:50:22.083599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:11.877 [2024-11-15 11:50:22.083610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-11-15 11:50:22.083616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:11.877 [2024-11-15 11:50:22.083628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-11-15 11:50:22.083633] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:11.877 [2024-11-15 11:50:22.083645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-11-15 11:50:22.083651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.877 [2024-11-15 11:50:22.083663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-11-15 11:50:22.083669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.877 [2024-11-15 11:50:22.083680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-11-15 11:50:22.083686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:11.877 [2024-11-15 11:50:22.083698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-11-15 11:50:22.083704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:11.877 [2024-11-15 11:50:22.083715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-11-15 11:50:22.083721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:11.877 [2024-11-15 11:50:22.083733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-11-15 11:50:22.083738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:11.877 [2024-11-15 11:50:22.083750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-11-15 11:50:22.083755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:11.877 [2024-11-15 11:50:22.083767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-11-15 11:50:22.083773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:11.877 [2024-11-15 11:50:22.083784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-11-15 11:50:22.083791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:11.877 [2024-11-15 11:50:22.083803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:11.877 [2024-11-15 11:50:22.083808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:11.877 [2024-11-15 11:50:22.083820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-11-15 11:50:22.083826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:11.877 [2024-11-15 11:50:22.083838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-11-15 11:50:22.083844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:11.877 [2024-11-15 11:50:22.083855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-11-15 11:50:22.083861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:11.877 [2024-11-15 11:50:22.083874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-11-15 11:50:22.083880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:11.877 [2024-11-15 11:50:22.084416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-11-15 11:50:22.084430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:11.877 [2024-11-15 11:50:22.084447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-11-15 11:50:22.084453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:11.877 [2024-11-15 11:50:22.084465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-11-15 11:50:22.084471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:11.877 [2024-11-15 11:50:22.084483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-11-15 11:50:22.084488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:11.877 [2024-11-15 11:50:22.084500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-11-15 11:50:22.084506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:11.877 [2024-11-15 11:50:22.084517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 
nsid:1 lba:12856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-11-15 11:50:22.084523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:11.877 [2024-11-15 11:50:22.084535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-11-15 11:50:22.084541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:11.877 [2024-11-15 11:50:22.084552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-11-15 11:50:22.084558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:11.877 [2024-11-15 11:50:22.084575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-11-15 11:50:22.084581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:11.877 [2024-11-15 11:50:22.084593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.877 [2024-11-15 11:50:22.084599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:11.877 [2024-11-15 11:50:22.084610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:12896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.878 [2024-11-15 11:50:22.084616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:11.878 [2024-11-15 11:50:22.084630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.878 [2024-11-15 11:50:22.084636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:11.878 [2024-11-15 11:50:22.084647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.878 [2024-11-15 11:50:22.084653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:11.878 [2024-11-15 11:50:22.084664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.878 [2024-11-15 11:50:22.084670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:11.878 [2024-11-15 11:50:22.084682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.878 [2024-11-15 11:50:22.084687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:11.878 [2024-11-15 11:50:22.084699] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.878 [2024-11-15 11:50:22.084704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:11.878 [2024-11-15 11:50:22.084716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.878 [2024-11-15 11:50:22.084722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:11.878 [2024-11-15 11:50:22.084733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.878 [2024-11-15 11:50:22.084739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.878 [2024-11-15 11:50:22.084750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.878 [2024-11-15 11:50:22.084756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.878 [2024-11-15 11:50:22.084768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.878 [2024-11-15 11:50:22.084773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.878 [2024-11-15 11:50:22.084785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.878 [2024-11-15 11:50:22.084791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:11.878 [2024-11-15 11:50:22.084802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.878 [2024-11-15 11:50:22.084808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:11.878 [2024-11-15 11:50:22.084819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.878 [2024-11-15 11:50:22.084825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:11.878 [2024-11-15 11:50:22.084836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:13000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.878 [2024-11-15 11:50:22.084843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:11.878 [2024-11-15 11:50:22.084854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.878 [2024-11-15 11:50:22.084860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0007 p:0 m:0 
dnr:0 00:27:11.878 [2024-11-15 11:50:22.084872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.878 [2024-11-15 11:50:22.084877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:11.878 [2024-11-15 11:50:22.084889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.878 [2024-11-15 11:50:22.084894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:11.878 [2024-11-15 11:50:22.084906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.878 [2024-11-15 11:50:22.084911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:11.878 [2024-11-15 11:50:22.084923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.878 [2024-11-15 11:50:22.084928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:11.878 [2024-11-15 11:50:22.084940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.878 [2024-11-15 11:50:22.084945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:11.878 [2024-11-15 11:50:22.084957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.878 [2024-11-15 11:50:22.084963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:11.878 [2024-11-15 11:50:22.084974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.878 [2024-11-15 11:50:22.084980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:11.878 [2024-11-15 11:50:22.084991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.878 [2024-11-15 11:50:22.084997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:11.878 [2024-11-15 11:50:22.085008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.878 [2024-11-15 11:50:22.085014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:11.878 [2024-11-15 11:50:22.085026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.878 [2024-11-15 11:50:22.085032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:27:11.878 [2024-11-15 11:50:22.085044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.878 [2024-11-15 11:50:22.085051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:27:11.878 [2024-11-15 11:50:22.085063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.878 [2024-11-15 11:50:22.085068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:27:11.878 [2024-11-15 11:50:22.085080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.878 [2024-11-15 11:50:22.085086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:27:11.878 [2024-11-15 11:50:22.085097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.878 [2024-11-15 11:50:22.085103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:27:11.878 [2024-11-15 11:50:22.085114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.878 [2024-11-15 11:50:22.085120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:27:11.878 [2024-11-15 11:50:22.085131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.878 [2024-11-15 11:50:22.085137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:27:11.878 [2024-11-15 11:50:22.085148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.878 [2024-11-15 11:50:22.085154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:27:11.878 [2024-11-15 11:50:22.085165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.878 [2024-11-15 11:50:22.085171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:27:11.878 [2024-11-15 11:50:22.085183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.878 [2024-11-15 11:50:22.085189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:27:11.878 [2024-11-15 11:50:22.085200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.878 [2024-11-15 11:50:22.085206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:27:11.878 [2024-11-15 11:50:22.085217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.879 [2024-11-15 11:50:22.085223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:27:11.879 [2024-11-15 11:50:22.085234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.879 [2024-11-15 11:50:22.085240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:27:11.879 [2024-11-15 11:50:22.085251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:13192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.879 [2024-11-15 11:50:22.085257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:27:11.879 [2024-11-15 11:50:22.085269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.879 [2024-11-15 11:50:22.085275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:27:11.879 [2024-11-15 11:50:22.085287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.879 [2024-11-15 11:50:22.085292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:27:11.879 [2024-11-15 11:50:22.085304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.879 [2024-11-15 11:50:22.085310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:11.879 [2024-11-15 11:50:22.085321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.879 [2024-11-15 11:50:22.085327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:11.879 [2024-11-15 11:50:22.085338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.879 [2024-11-15 11:50:22.085344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:27:11.879 [2024-11-15 11:50:22.085355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.879 [2024-11-15 11:50:22.085361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:27:11.879 [2024-11-15 11:50:22.085372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.879 [2024-11-15 11:50:22.085378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:27:11.879 [2024-11-15 11:50:22.085389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.879 [2024-11-15 11:50:22.085395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:27:11.879 [2024-11-15 11:50:22.085406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.879 [2024-11-15 11:50:22.085412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:27:11.879 [2024-11-15 11:50:22.085423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:13272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.879 [2024-11-15 11:50:22.085429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:27:11.879 [2024-11-15 11:50:22.085440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.879 [2024-11-15 11:50:22.085446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:27:11.879 [2024-11-15 11:50:22.085457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.879 [2024-11-15 11:50:22.085463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:11.879 [2024-11-15 11:50:22.085478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.879 [2024-11-15 11:50:22.085484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:27:11.879 [2024-11-15 11:50:22.085495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.879 [2024-11-15 11:50:22.085501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:27:11.879 [2024-11-15 11:50:22.085512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.879 [2024-11-15 11:50:22.085518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:11.879 [2024-11-15 11:50:22.085530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.879 [2024-11-15 11:50:22.085535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:27:11.879 [2024-11-15 11:50:22.085547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.879 [2024-11-15 11:50:22.085553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:27:11.879 [2024-11-15 11:50:22.085568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.879 [2024-11-15 11:50:22.085575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:27:11.879 [2024-11-15 11:50:22.086109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.879 [2024-11-15 11:50:22.086120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:27:11.879 [2024-11-15 11:50:22.086133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.879 [2024-11-15 11:50:22.086139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:27:11.879 [2024-11-15 11:50:22.086155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:13360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.879 [2024-11-15 11:50:22.086165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:27:11.879 [2024-11-15 11:50:22.086178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.879 [2024-11-15 11:50:22.086184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:27:11.879 [2024-11-15 11:50:22.086196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:13376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.879 [2024-11-15 11:50:22.086202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:27:11.879 [2024-11-15 11:50:22.086214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.879 [2024-11-15 11:50:22.086220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:27:11.879 [2024-11-15 11:50:22.086233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.879 [2024-11-15 11:50:22.086246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:27:11.879 [2024-11-15 11:50:22.086258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.879 [2024-11-15 11:50:22.086264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:27:11.879 [2024-11-15 11:50:22.086276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.879 [2024-11-15 11:50:22.086282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:27:11.879 [2024-11-15 11:50:22.086294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.879 [2024-11-15 11:50:22.086303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:27:11.879 [2024-11-15 11:50:22.086318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:13424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.879 [2024-11-15 11:50:22.086324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:27:11.879 [2024-11-15 11:50:22.086335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:13432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.879 [2024-11-15 11:50:22.086341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:27:11.879 [2024-11-15 11:50:22.086353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.879 [2024-11-15 11:50:22.086358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:27:11.880 [2024-11-15 11:50:22.086370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:13448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.880 [2024-11-15 11:50:22.086376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:27:11.880 [2024-11-15 11:50:22.086387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.880 [2024-11-15 11:50:22.086393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:27:11.880 [2024-11-15 11:50:22.086404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.880 [2024-11-15 11:50:22.086410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:27:11.880 [2024-11-15 11:50:22.086422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.880 [2024-11-15 11:50:22.086428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:11.880 [2024-11-15 11:50:22.086440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.880 [2024-11-15 11:50:22.086446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:11.880 [2024-11-15 11:50:22.086457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.880 [2024-11-15 11:50:22.086464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:27:11.880 [2024-11-15 11:50:22.086476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.880 [2024-11-15 11:50:22.086482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:27:11.880 [2024-11-15 11:50:22.086493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.880 [2024-11-15 11:50:22.086499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:27:11.880 [2024-11-15 11:50:22.086510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.880 [2024-11-15 11:50:22.086516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:27:11.880 [2024-11-15 11:50:22.086527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.880 [2024-11-15 11:50:22.086533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:27:11.880 [2024-11-15 11:50:22.086545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:13528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.880 [2024-11-15 11:50:22.086550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:27:11.880 [2024-11-15 11:50:22.086566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.880 [2024-11-15 11:50:22.086573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:27:11.880 [2024-11-15 11:50:22.086584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.880 [2024-11-15 11:50:22.086590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:11.880 [2024-11-15 11:50:22.086601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.880 [2024-11-15 11:50:22.086607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:27:11.880 [2024-11-15 11:50:22.086619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.880 [2024-11-15 11:50:22.086625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:27:11.880 [2024-11-15 11:50:22.086636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.880 [2024-11-15 11:50:22.086642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:11.880 [2024-11-15 11:50:22.086654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.880 [2024-11-15 11:50:22.086660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:27:11.880 [2024-11-15 11:50:22.086671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.880 [2024-11-15 11:50:22.086678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:27:11.880 [2024-11-15 11:50:22.086692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.880 [2024-11-15 11:50:22.086698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:27:11.880 [2024-11-15 11:50:22.086709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.880 [2024-11-15 11:50:22.086715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:27:11.880 [2024-11-15 11:50:22.086726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.880 [2024-11-15 11:50:22.086732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:27:11.880 [2024-11-15 11:50:22.086743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.880 [2024-11-15 11:50:22.086749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:27:11.880 [2024-11-15 11:50:22.086761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.880 [2024-11-15 11:50:22.086766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:27:11.880 [2024-11-15 11:50:22.086778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:12616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.880 [2024-11-15 11:50:22.086784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:27:11.880 [2024-11-15 11:50:22.086795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.880 [2024-11-15 11:50:22.086801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:27:11.880 [2024-11-15 11:50:22.086812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.880 [2024-11-15 11:50:22.086818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:27:11.880 [2024-11-15 11:50:22.086829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.880 [2024-11-15 11:50:22.086835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:27:11.880 [2024-11-15 11:50:22.086847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.880 [2024-11-15 11:50:22.086853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:27:11.880 [2024-11-15 11:50:22.086864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.880 [2024-11-15 11:50:22.086871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:27:11.880 [2024-11-15 11:50:22.086882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.880 [2024-11-15 11:50:22.086888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:27:11.880 [2024-11-15 11:50:22.086901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:12664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.880 [2024-11-15 11:50:22.086906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:27:11.880 [2024-11-15 11:50:22.086918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.880 [2024-11-15 11:50:22.086924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:27:11.880 [2024-11-15 11:50:22.086935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:12680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.880 [2024-11-15 11:50:22.086941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:27:11.880 [2024-11-15 11:50:22.086953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.880 [2024-11-15 11:50:22.086960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:27:11.880 [2024-11-15 11:50:22.086971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.880 [2024-11-15 11:50:22.086978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:27:11.880 [2024-11-15 11:50:22.086990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.881 [2024-11-15 11:50:22.086996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:11.881 [2024-11-15 11:50:22.087008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.881 [2024-11-15 11:50:22.087014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:11.881 [2024-11-15 11:50:22.087025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.881 [2024-11-15 11:50:22.087031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:27:11.881 [2024-11-15 11:50:22.087043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.881 [2024-11-15 11:50:22.087049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:27:11.881 [2024-11-15 11:50:22.087060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.881 [2024-11-15 11:50:22.087067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:27:11.881 [2024-11-15 11:50:22.087078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.881 [2024-11-15 11:50:22.087084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:27:11.881 [2024-11-15 11:50:22.087095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.881 [2024-11-15 11:50:22.087101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:27:11.881 [2024-11-15 11:50:22.087113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.881 [2024-11-15 11:50:22.087121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:27:11.881 [2024-11-15 11:50:22.087132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.881 [2024-11-15 11:50:22.087138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:27:11.881 [2024-11-15 11:50:22.087149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:12776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.881 [2024-11-15 11:50:22.087155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:11.881 [2024-11-15 11:50:22.087167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.881 [2024-11-15 11:50:22.087172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:27:11.881 [2024-11-15 11:50:22.087184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.881 [2024-11-15 11:50:22.087189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:27:11.881 [2024-11-15 11:50:22.087201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.881 [2024-11-15 11:50:22.087207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:27:11.881 [2024-11-15 11:50:22.087705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.881 [2024-11-15 11:50:22.087717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:27:11.881 [2024-11-15 11:50:22.087730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.881 [2024-11-15 11:50:22.087736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:27:11.881 [2024-11-15 11:50:22.087747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.881 [2024-11-15 11:50:22.087753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:27:11.881 [2024-11-15 11:50:22.087765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:12832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.881 [2024-11-15 11:50:22.087771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:27:11.881 [2024-11-15 11:50:22.087783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:12840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.881 [2024-11-15 11:50:22.087789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:27:11.881 [2024-11-15 11:50:22.087800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.881 [2024-11-15 11:50:22.087808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:27:11.881 [2024-11-15 11:50:22.087824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.881 [2024-11-15 11:50:22.087832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:27:11.881 [2024-11-15 11:50:22.087844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.881 [2024-11-15 11:50:22.087850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:27:11.881 [2024-11-15 11:50:22.087861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.881 [2024-11-15 11:50:22.087867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:27:11.881 [2024-11-15 11:50:22.087883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.881 [2024-11-15 11:50:22.087889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:27:11.881 [2024-11-15 11:50:22.087901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.881 [2024-11-15 11:50:22.087907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:27:11.881 [2024-11-15 11:50:22.087919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.881 [2024-11-15 11:50:22.087924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:27:11.881 [2024-11-15 11:50:22.087936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.881 [2024-11-15 11:50:22.087942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:27:11.881 [2024-11-15 11:50:22.087953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.881 [2024-11-15 11:50:22.087959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:27:11.881 [2024-11-15 11:50:22.087970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.881 [2024-11-15 11:50:22.087976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:27:11.881 [2024-11-15 11:50:22.087988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.881 [2024-11-15 11:50:22.087993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:11.881 [2024-11-15 11:50:22.088005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.881 [2024-11-15 11:50:22.088011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:27:11.881 [2024-11-15 11:50:22.088022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.881 [2024-11-15 11:50:22.088028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:27:11.881 [2024-11-15 11:50:22.088039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.881 [2024-11-15 11:50:22.088045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:11.881 [2024-11-15 11:50:22.088058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.881 [2024-11-15 11:50:22.088064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:11.881 [2024-11-15 11:50:22.088075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.881 [2024-11-15 11:50:22.088081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:11.881 [2024-11-15 11:50:22.088092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.881 [2024-11-15 11:50:22.088098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:27:11.881 [2024-11-15 11:50:22.088109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.881 [2024-11-15 11:50:22.088115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:27:11.881 [2024-11-15 11:50:22.088127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.881 [2024-11-15 11:50:22.088132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:27:11.881 [2024-11-15 11:50:22.088144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.881 [2024-11-15 11:50:22.088149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:27:11.881 [2024-11-15 11:50:22.088161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.881 [2024-11-15 11:50:22.088167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:27:11.882 [2024-11-15 11:50:22.088178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.882 [2024-11-15 11:50:22.088184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:27:11.882 [2024-11-15 11:50:22.088195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.882 [2024-11-15 11:50:22.088201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:27:11.882 [2024-11-15 11:50:22.088213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.882 [2024-11-15 11:50:22.088218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:11.882 [2024-11-15 11:50:22.088230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.882 [2024-11-15 11:50:22.088235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:27:11.882 [2024-11-15 11:50:22.088247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.882 [2024-11-15 11:50:22.088253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:27:11.882 [2024-11-15 11:50:22.088265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.882 [2024-11-15 11:50:22.088271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:11.882 [2024-11-15 11:50:22.088282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.882 [2024-11-15 11:50:22.088288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:27:11.882 [2024-11-15 11:50:22.088299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.882 [2024-11-15 11:50:22.088305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:27:11.882 [2024-11-15 11:50:22.088316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.882 [2024-11-15 11:50:22.088322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:27:11.882 [2024-11-15 11:50:22.088334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.882 [2024-11-15 11:50:22.088339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:27:11.882 [2024-11-15 11:50:22.088351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:13096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.882 [2024-11-15 11:50:22.088356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:27:11.882 [2024-11-15 11:50:22.088368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:13104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.882 [2024-11-15 11:50:22.088374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:27:11.882 [2024-11-15 11:50:22.088385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.882 [2024-11-15 11:50:22.088391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:27:11.882 [2024-11-15 11:50:22.088402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.882 [2024-11-15 11:50:22.088408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:27:11.882 [2024-11-15 11:50:22.088419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.882 [2024-11-15 11:50:22.088425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:27:11.882 [2024-11-15 11:50:22.088436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:13136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.882 [2024-11-15 11:50:22.088442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:27:11.882 [2024-11-15 11:50:22.088453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:13144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.882 [2024-11-15 11:50:22.088459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:27:11.882 [2024-11-15 11:50:22.088470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.882 [2024-11-15 11:50:22.088477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:27:11.882 [2024-11-15 11:50:22.088489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.882 [2024-11-15 11:50:22.088494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:27:11.882 [2024-11-15 11:50:22.088506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.882 [2024-11-15 11:50:22.088511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:27:11.882 [2024-11-15 11:50:22.088523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.882 [2024-11-15 11:50:22.088528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:27:11.882 [2024-11-15 11:50:22.088540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:13184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.882 [2024-11-15 11:50:22.088546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:27:11.882 [2024-11-15 11:50:22.092791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.882 [2024-11-15 11:50:22.092814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:27:11.882 [2024-11-15 11:50:22.092829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.882 [2024-11-15 11:50:22.092836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:27:11.882 [2024-11-15 11:50:22.092850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.882 [2024-11-15 11:50:22.092856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:27:11.882 [2024-11-15 11:50:22.092868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.882 [2024-11-15 11:50:22.092874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:11.882 [2024-11-15 11:50:22.092886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:13224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.882 [2024-11-15 11:50:22.092892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:11.882 [2024-11-15 11:50:22.092904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.882 [2024-11-15 11:50:22.092910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:27:11.882 [2024-11-15 11:50:22.092922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:13240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.882 [2024-11-15 11:50:22.092928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:27:11.882 [2024-11-15 11:50:22.092940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.882 [2024-11-15 11:50:22.092950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:27:11.882 [2024-11-15 11:50:22.092962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.882 [2024-11-15 11:50:22.092968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:27:11.882 [2024-11-15 11:50:22.092980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.882 [2024-11-15 11:50:22.092986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:27:11.882 [2024-11-15 11:50:22.092998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.882 [2024-11-15 11:50:22.093004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:27:11.882 [2024-11-15 11:50:22.093016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.882 [2024-11-15 11:50:22.093022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:27:11.882 [2024-11-15 11:50:22.093034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.882 [2024-11-15 11:50:22.093040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:11.882 [2024-11-15 11:50:22.093052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.882 [2024-11-15 11:50:22.093058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:27:11.882 [2024-11-15 11:50:22.093070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.882 [2024-11-15 11:50:22.093076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:27:11.882 [2024-11-15 11:50:22.093088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.882 [2024-11-15 11:50:22.093094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:11.882 [2024-11-15 11:50:22.093106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:13320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.882 [2024-11-15 11:50:22.093112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:27:11.883 [2024-11-15 11:50:22.093124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.883 [2024-11-15 11:50:22.093130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:27:11.883 [2024-11-15 11:50:22.093719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.883 [2024-11-15 11:50:22.093736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:27:11.883 [2024-11-15 11:50:22.093751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.883 [2024-11-15 11:50:22.093758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:27:11.883 [2024-11-15 11:50:22.093774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.883 [2024-11-15 11:50:22.093785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:27:11.883 [2024-11-15 11:50:22.093797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.883 [2024-11-15 11:50:22.093803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:27:11.883 [2024-11-15 11:50:22.093815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.883 [2024-11-15 11:50:22.093821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:27:11.883 [2024-11-15 11:50:22.093833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.883 [2024-11-15 11:50:22.093844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:27:11.883 [2024-11-15 11:50:22.093858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:13384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.883 [2024-11-15 11:50:22.093864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:27:11.883 [2024-11-15 11:50:22.093876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.883 [2024-11-15 11:50:22.093882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:27:11.883 [2024-11-15 11:50:22.093894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.883 [2024-11-15 11:50:22.093900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:27:11.883 [2024-11-15 11:50:22.093912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.883 [2024-11-15 11:50:22.093918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:27:11.883 [2024-11-15 11:50:22.093930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.883 [2024-11-15 11:50:22.093936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:27:11.883 [2024-11-15 11:50:22.093948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.883 [2024-11-15 11:50:22.093955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:27:11.883 [2024-11-15 11:50:22.093966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:13432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.883 [2024-11-15 11:50:22.093972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:27:11.883 [2024-11-15 11:50:22.093984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.883 [2024-11-15 11:50:22.093990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:27:11.883 [2024-11-15 11:50:22.094004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.883 [2024-11-15 11:50:22.094010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:27:11.883 [2024-11-15 11:50:22.094022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.883 [2024-11-15 11:50:22.094028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:27:11.883 [2024-11-15 11:50:22.094040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.883 [2024-11-15 11:50:22.094046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:27:11.883 [2024-11-15 11:50:22.094057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.883 [2024-11-15 11:50:22.094063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:11.883 [2024-11-15 11:50:22.094075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.883 [2024-11-15 11:50:22.094081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:11.883 [2024-11-15 11:50:22.094093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:13488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.883 [2024-11-15 11:50:22.094099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:27:11.883 [2024-11-15 11:50:22.094111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.883 [2024-11-15 11:50:22.094117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:27:11.883 [2024-11-15 11:50:22.094129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.883 [2024-11-15 11:50:22.094135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:27:11.883 [2024-11-15 11:50:22.094147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.883 [2024-11-15 11:50:22.094153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:27:11.883 [2024-11-15 11:50:22.094164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:13520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.883 [2024-11-15 11:50:22.094170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:27:11.883 [2024-11-15 11:50:22.094182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.883 [2024-11-15 11:50:22.094188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:27:11.883 [2024-11-15 11:50:22.094200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.883 [2024-11-15 11:50:22.094206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:27:11.883 [2024-11-15 11:50:22.094218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.883 [2024-11-15 11:50:22.094225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:11.883 [2024-11-15 11:50:22.094237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.883 [2024-11-15 11:50:22.094243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:27:11.883 [2024-11-15 11:50:22.094255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.883 [2024-11-15 11:50:22.094261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:27:11.883 [2024-11-15 11:50:22.094273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.883 [2024-11-15 11:50:22.094279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:11.883 [2024-11-15 11:50:22.094290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.883 [2024-11-15 11:50:22.094296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:27:11.884 [2024-11-15 11:50:22.094308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.884 [2024-11-15 11:50:22.094314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:27:11.884 [2024-11-15 11:50:22.094326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.884 [2024-11-15 11:50:22.094332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:27:11.884 [2024-11-15 11:50:22.094344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.884 [2024-11-15 11:50:22.094349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:27:11.884 [2024-11-15 11:50:22.094361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.884 [2024-11-15 11:50:22.094367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:27:11.884 [2024-11-15 11:50:22.094379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.884 [2024-11-15 11:50:22.094385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:27:11.884 [2024-11-15 11:50:22.094397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.884 [2024-11-15 11:50:22.094403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:27:11.884 [2024-11-15 11:50:22.094415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.884 [2024-11-15 11:50:22.094421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:27:11.884 [2024-11-15 11:50:22.094433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT
0x0 00:27:11.884 [2024-11-15 11:50:22.094440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:11.884 [2024-11-15 11:50:22.094452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.884 [2024-11-15 11:50:22.094458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:11.884 [2024-11-15 11:50:22.094470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:13536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.884 [2024-11-15 11:50:22.094476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:11.884 [2024-11-15 11:50:22.094488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:12640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.884 [2024-11-15 11:50:22.094494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:11.884 [2024-11-15 11:50:22.094506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:12648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.884 [2024-11-15 11:50:22.094512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:11.884 [2024-11-15 11:50:22.094524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.884 [2024-11-15 11:50:22.094530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:11.884 [2024-11-15 11:50:22.094542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:12664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.884 [2024-11-15 11:50:22.094547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:11.884 [2024-11-15 11:50:22.094559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.884 [2024-11-15 11:50:22.094570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:11.884 [2024-11-15 11:50:22.094582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.884 [2024-11-15 11:50:22.094588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:11.884 [2024-11-15 11:50:22.094600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.884 [2024-11-15 11:50:22.094606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:11.884 [2024-11-15 11:50:22.094618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 
nsid:1 lba:12696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.884 [2024-11-15 11:50:22.094624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:11.884 [2024-11-15 11:50:22.094636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.884 [2024-11-15 11:50:22.094642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:11.884 [2024-11-15 11:50:22.094654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.884 [2024-11-15 11:50:22.094663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.884 [2024-11-15 11:50:22.094675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.884 [2024-11-15 11:50:22.094681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:11.884 [2024-11-15 11:50:22.094693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.884 [2024-11-15 11:50:22.094699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:11.884 [2024-11-15 11:50:22.094711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:12736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.884 [2024-11-15 11:50:22.094717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:11.884 [2024-11-15 11:50:22.094729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.884 [2024-11-15 11:50:22.094734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:11.884 [2024-11-15 11:50:22.094746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.884 [2024-11-15 11:50:22.094752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:11.884 [2024-11-15 11:50:22.094764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.884 [2024-11-15 11:50:22.094770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:11.884 [2024-11-15 11:50:22.094782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.884 [2024-11-15 11:50:22.094788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:11.884 [2024-11-15 11:50:22.094800] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.884 [2024-11-15 11:50:22.094806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:11.884 [2024-11-15 11:50:22.094817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.884 [2024-11-15 11:50:22.094824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:11.884 [2024-11-15 11:50:22.094836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.884 [2024-11-15 11:50:22.094842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:11.884 [2024-11-15 11:50:22.096362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.884 [2024-11-15 11:50:22.096374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:11.884 [2024-11-15 11:50:22.096400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.884 [2024-11-15 11:50:22.096407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:11.884 [2024-11-15 11:50:22.096429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.884 [2024-11-15 11:50:22.096436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:11.884 [2024-11-15 11:50:22.096451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.884 [2024-11-15 11:50:22.096457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:11.884 [2024-11-15 11:50:22.096475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.884 [2024-11-15 11:50:22.096481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:11.884 [2024-11-15 11:50:22.096497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.884 [2024-11-15 11:50:22.096503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:11.884 [2024-11-15 11:50:22.096518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.885 [2024-11-15 11:50:22.096525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0073 p:0 m:0 
dnr:0 00:27:11.885 [2024-11-15 11:50:22.096540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.885 [2024-11-15 11:50:22.096546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:11.885 [2024-11-15 11:50:22.096568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.885 [2024-11-15 11:50:22.096575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:11.885 [2024-11-15 11:50:22.096590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:12872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.885 [2024-11-15 11:50:22.096596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:11.885 [2024-11-15 11:50:22.096612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:12880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.885 [2024-11-15 11:50:22.096618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:11.885 [2024-11-15 11:50:22.096634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.885 [2024-11-15 11:50:22.096640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:11.885 [2024-11-15 11:50:22.096656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.885 [2024-11-15 11:50:22.096662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:11.885 [2024-11-15 11:50:22.096677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.885 [2024-11-15 11:50:22.096683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:11.885 [2024-11-15 11:50:22.096700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.885 [2024-11-15 11:50:22.096706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:11.885 [2024-11-15 11:50:22.096722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.885 [2024-11-15 11:50:22.096728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:11.885 [2024-11-15 11:50:22.096743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.885 [2024-11-15 11:50:22.096749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:11.885 [2024-11-15 11:50:22.096764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.885 [2024-11-15 11:50:22.096771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:11.885 [2024-11-15 11:50:22.096786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:12944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.885 [2024-11-15 11:50:22.096792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:11.885 [2024-11-15 11:50:22.096807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.885 [2024-11-15 11:50:22.096813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:11.885 [2024-11-15 11:50:22.096829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.885 [2024-11-15 11:50:22.096835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:11.885 [2024-11-15 11:50:22.096850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.885 [2024-11-15 11:50:22.096856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.885 [2024-11-15 11:50:22.096872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.885 [2024-11-15 11:50:22.096878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:11.885 [2024-11-15 11:50:22.096893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:12984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.885 [2024-11-15 11:50:22.096899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:11.885 [2024-11-15 11:50:22.096915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.885 [2024-11-15 11:50:22.096921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:11.885 [2024-11-15 11:50:22.096936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.885 [2024-11-15 11:50:22.096942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:11.885 [2024-11-15 11:50:22.096957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.885 [2024-11-15 11:50:22.096965] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:11.885 [2024-11-15 11:50:22.096980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.885 [2024-11-15 11:50:22.096986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:11.885 [2024-11-15 11:50:22.097001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.885 [2024-11-15 11:50:22.097008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:11.885 [2024-11-15 11:50:22.097023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.885 [2024-11-15 11:50:22.097029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:11.885 [2024-11-15 11:50:22.097044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.885 [2024-11-15 11:50:22.097050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:11.885 [2024-11-15 11:50:22.097066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.885 [2024-11-15 11:50:22.097072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:11.885 [2024-11-15 11:50:22.097087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:13056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.885 [2024-11-15 11:50:22.097093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:11.885 [2024-11-15 11:50:22.097108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.885 [2024-11-15 11:50:22.097114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:11.885 [2024-11-15 11:50:22.097130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.885 [2024-11-15 11:50:22.097136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:11.885 [2024-11-15 11:50:22.097151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.885 [2024-11-15 11:50:22.097157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:11.885 [2024-11-15 11:50:22.097172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:11.885 [2024-11-15 11:50:22.097178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:11.885 [2024-11-15 11:50:22.097193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:13096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.885 [2024-11-15 11:50:22.097199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:11.885 [2024-11-15 11:50:22.097214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.885 [2024-11-15 11:50:22.097222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:11.885 [2024-11-15 11:50:22.097237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.885 [2024-11-15 11:50:22.097243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:11.885 [2024-11-15 11:50:22.097259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.885 [2024-11-15 11:50:22.097265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:11.885 [2024-11-15 11:50:22.097280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.885 [2024-11-15 11:50:22.097286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:11.885 [2024-11-15 11:50:22.097301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.885 [2024-11-15 11:50:22.097307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:11.885 [2024-11-15 11:50:22.097323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.885 [2024-11-15 11:50:22.097329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:11.885 [2024-11-15 11:50:22.097344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.885 [2024-11-15 11:50:22.097350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:11.886 [2024-11-15 11:50:22.097366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.886 [2024-11-15 11:50:22.097372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:11.886 [2024-11-15 11:50:22.097387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:13168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.886 [2024-11-15 11:50:22.097393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:11.886 [2024-11-15 11:50:22.097408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:13176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.886 [2024-11-15 11:50:22.097414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:11.886 [2024-11-15 11:50:22.097429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.886 [2024-11-15 11:50:22.097435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:11.886 [2024-11-15 11:50:22.097450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:13192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.886 [2024-11-15 11:50:22.097456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:11.886 [2024-11-15 11:50:22.097471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.886 [2024-11-15 11:50:22.097478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:11.886 [2024-11-15 11:50:22.097494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.886 [2024-11-15 11:50:22.097500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:11.886 [2024-11-15 11:50:22.097515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:13216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.886 [2024-11-15 11:50:22.097521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:11.886 [2024-11-15 11:50:22.097536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.886 [2024-11-15 11:50:22.097542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.886 [2024-11-15 11:50:22.097557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.886 [2024-11-15 11:50:22.097567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:11.886 [2024-11-15 11:50:22.097582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.886 [2024-11-15 11:50:22.097588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:11.886 [2024-11-15 11:50:22.097603] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.886 [2024-11-15 11:50:22.097609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:11.886 [2024-11-15 11:50:22.097625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:13256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.886 [2024-11-15 11:50:22.097631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:11.886 [2024-11-15 11:50:22.097646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:13264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.886 [2024-11-15 11:50:22.097652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:11.886 [2024-11-15 11:50:22.097668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:13272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.886 [2024-11-15 11:50:22.097673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:11.886 [2024-11-15 11:50:22.097689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.886 [2024-11-15 11:50:22.097695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:11.886 [2024-11-15 11:50:22.097710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.886 [2024-11-15 11:50:22.097717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:11.886 [2024-11-15 11:50:22.097732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.886 [2024-11-15 11:50:22.097738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:11.886 [2024-11-15 11:50:22.097754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:13304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.886 [2024-11-15 11:50:22.097760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:11.886 [2024-11-15 11:50:22.097775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.886 [2024-11-15 11:50:22.097781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:11.886 [2024-11-15 11:50:22.097797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:11.886 [2024-11-15 11:50:22.097803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002e p:0 m:0 dnr:0 
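The burst above is one uniform failure signature: each I/O still queued on qid:1 completes with NVMe status (03/02), i.e. Status Code Type 3h (Path Related Status) and Status Code 02h (Asymmetric Access Inaccessible), which is the expected result while the multipath test holds this path's ANA group in the inaccessible state. A saved console log can be tallied mechanically; a minimal sketch in the suite's own bash (build.log is a hypothetical capture of this console output, not a file the job itself produces):

# Count completions failed with the ANA-inaccessible status (sct 03 / sc 02)
grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' build.log

# Split the same failures by opcode, using the paired command prints
grep -oE '\*NOTICE\*: (READ|WRITE) sqid:1' build.log | sort | uniq -c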
00:27:11.886 12418.42 IOPS, 48.51 MiB/s [2024-11-15T10:50:37.384Z] [2024-11-15 11:50:22.097950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.886 [2024-11-15 11:50:22.097958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:27:11.886 11463.15 IOPS, 44.78 MiB/s [2024-11-15T10:50:37.384Z] 10644.36 IOPS, 41.58 MiB/s [2024-11-15T10:50:37.384Z] 9934.73 IOPS, 38.81 MiB/s [2024-11-15T10:50:37.384Z] 10116.00 IOPS, 39.52 MiB/s [2024-11-15T10:50:37.384Z] 10277.59 IOPS, 40.15 MiB/s [2024-11-15T10:50:37.384Z] 10620.56 IOPS, 41.49 MiB/s [2024-11-15T10:50:37.384Z] 10986.37 IOPS, 42.92 MiB/s [2024-11-15T10:50:37.384Z] 11198.55 IOPS, 43.74 MiB/s [2024-11-15T10:50:37.384Z] 11270.48 IOPS, 44.03 MiB/s [2024-11-15T10:50:37.384Z] 11340.91 IOPS, 44.30 MiB/s [2024-11-15T10:50:37.384Z] 11539.70 IOPS, 45.08 MiB/s [2024-11-15T10:50:37.384Z] 11768.38 IOPS, 45.97 MiB/s [2024-11-15T10:50:37.384Z] [2024-11-15 11:50:34.780223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:122600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:11.886 [2024-11-15 11:50:34.780258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005e p:0 m:0 dnr:0
[... further *NOTICE* command/completion pairs elided: interleaved READs and WRITEs for lba 122632 through 123384 on qid:1, every one completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:27:11.887 [2024-11-15 11:50:34.783487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:123400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:11.887 [2024-11-15 11:50:34.783492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:27:11.887 11930.64 IOPS, 46.60 MiB/s [2024-11-15T10:50:37.385Z] 11968.19 IOPS, 46.75 MiB/s [2024-11-15T10:50:37.385Z] Received shutdown signal, test time was about 26.859154 seconds
00:27:11.887
00:27:11.887 Latency(us)
00:27:11.887 [2024-11-15T10:50:37.385Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:27:11.887 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:27:11.887 Verification LBA range: start 0x0 length 0x4000
00:27:11.887 Nvme0n1                     :      26.86   12001.69      46.88       0.00       0.00   10645.92     607.57 3075822.93
00:27:11.887 [2024-11-15T10:50:37.385Z] ===================================================================================================================
00:27:11.887 [2024-11-15T10:50:37.385Z] Total                       :              12001.69      46.88       0.00       0.00   10645.92     607.57 3075822.93
00:27:11.887 11:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:11.888 11:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:27:11.888 11:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:27:11.888 11:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:27:11.888 11:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:27:11.888 11:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:27:11.888 11:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:27:11.888 11:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:27:11.888 11:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:27:11.888 11:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:27:11.888 rmmod nvme_tcp
00:27:11.888 rmmod nvme_fabrics
00:27:11.888 rmmod nvme_keyring
00:27:11.888 11:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:27:12.149 11:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:27:12.149 11:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
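The set +e / for i in {1..20} / set -e bracket traced here is nvmfcleanup retrying module unload until the last NVMe-oF references drain; in this run a single pass sufficed. In shape it amounts to the following (a paraphrase of the traced flow, not the verbatim function from nvmf/common.sh; the break condition and the sleep are assumptions, not visible in the trace):

set +e                      # unloading can fail while handles are still open
for i in {1..20}; do
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
    sleep 1                 # assumed pacing between attempts
done
set -e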
00:27:12.149 11:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1202895 ']'
00:27:12.149 11:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 1202895
00:27:12.149 11:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 1202895 ']'
00:27:12.149 11:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 1202895
00:27:12.149 11:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname
00:27:12.149 11:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:27:12.149 11:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1202895
00:27:12.149 11:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:27:12.149 11:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:27:12.149 11:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1202895'
00:27:12.149 killing process with pid 1202895
00:27:12.149 11:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 1202895
00:27:12.149 11:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 1202895
00:27:12.149 11:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:27:12.149 11:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:27:12.149 11:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:27:12.149 11:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:27:12.149 11:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:27:12.149 11:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:27:12.149 11:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:27:12.149 11:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:27:12.149 11:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:27:12.149 11:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:12.149 11:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:27:12.149 11:50:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:14.694 11:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:27:14.694
00:27:14.694 real 0m41.360s
00:27:14.694 user 1m47.619s
00:27:14.694 sys 0m11.367s
00:27:14.694 11:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable
00:27:14.694 11:50:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:27:14.694 ************************************
00:27:14.694 END TEST nvmf_host_multipath_status
00:27:14.694 ************************************
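Condensed, the teardown that closed out this test is: drop the subsystem over RPC, kill and reap the target, unload the transport modules, strip SPDK's iptables rules, and undo the interface plumbing. A standalone sketch of the same sequence (bash; tgt_pid is a stand-in for the target's pid, 1202895 in this run, and the last two commands mirror what remove_spdk_ns and the @303 step did here):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
"$SPDK/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$tgt_pid" && wait "$tgt_pid"                     # target process was reactor_0 in this run
modprobe -v -r nvme-tcp nvme-fabrics                   # drops nvme_tcp/nvme_fabrics/nvme_keyring
iptables-save | grep -v SPDK_NVMF | iptables-restore   # the iptr helper: remove SPDK_NVMF rules
ip netns delete cvl_0_0_ns_spdk                        # the test namespace named in the trace above
ip -4 addr flush cvl_0_1                               # final interface cleanup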
nvmf_host_multipath_status 00:27:14.694 ************************************ 00:27:14.694 11:50:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:14.694 11:50:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:14.694 11:50:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:14.694 11:50:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.694 ************************************ 00:27:14.694 START TEST nvmf_discovery_remove_ifc 00:27:14.694 ************************************ 00:27:14.694 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:14.694 * Looking for test storage... 00:27:14.694 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:14.694 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:14.694 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:27:14.694 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:14.694 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:14.694 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:14.694 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:14.694 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:14.694 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:27:14.694 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:27:14.694 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:27:14.694 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:27:14.694 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:27:14.694 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:27:14.694 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:27:14.694 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:14.694 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:27:14.694 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:27:14.694 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:14.694 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:14.694 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:14.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.695 --rc genhtml_branch_coverage=1 00:27:14.695 --rc genhtml_function_coverage=1 00:27:14.695 --rc genhtml_legend=1 00:27:14.695 --rc geninfo_all_blocks=1 00:27:14.695 --rc geninfo_unexecuted_blocks=1 00:27:14.695 00:27:14.695 ' 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:14.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.695 --rc genhtml_branch_coverage=1 00:27:14.695 --rc genhtml_function_coverage=1 00:27:14.695 --rc genhtml_legend=1 00:27:14.695 --rc geninfo_all_blocks=1 00:27:14.695 --rc geninfo_unexecuted_blocks=1 00:27:14.695 00:27:14.695 ' 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:14.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.695 --rc genhtml_branch_coverage=1 00:27:14.695 --rc genhtml_function_coverage=1 00:27:14.695 --rc genhtml_legend=1 00:27:14.695 --rc geninfo_all_blocks=1 00:27:14.695 --rc geninfo_unexecuted_blocks=1 00:27:14.695 00:27:14.695 ' 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:14.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:14.695 --rc genhtml_branch_coverage=1 00:27:14.695 --rc genhtml_function_coverage=1 00:27:14.695 --rc genhtml_legend=1 00:27:14.695 --rc geninfo_all_blocks=1 00:27:14.695 --rc geninfo_unexecuted_blocks=1 00:27:14.695 00:27:14.695 ' 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:14.695 
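The cmp_versions trace above is scripts/common.sh deciding whether the installed lcov (1.15) predates version 2, which selects the legacy --rc lcov_* option spellings exported a few commands later. A minimal standalone sketch of the same field-by-field comparison (simplified from the traced helper, not a verbatim copy of it):

    # version_lt A B: succeed if dotted version A sorts before B.
    version_lt() {
        local -a v1 v2
        local i
        IFS='.-:' read -ra v1 <<< "$1"   # split on the same separators the trace uses
        IFS='.-:' read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            ((${v1[i]:-0} < ${v2[i]:-0})) && return 0   # first differing field decides
            ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "old lcov: use --rc lcov_branch_coverage=1 ..."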
11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:14.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:27:14.695 11:50:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:27:22.838 11:50:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:22.838 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:22.838 11:50:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:22.838 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:22.838 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:22.838 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:22.838 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:22.839 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:22.839 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:22.839 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:22.839 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:22.839 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:22.839 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:22.839 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:22.839 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:22.839 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:22.839 
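All of the nvmf_tcp_init plumbing traced above boils down to a two-port split of one E810 NIC: cvl_0_0 is moved into a private network namespace and becomes the target interface at 10.0.0.2, while cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1, with port 4420 opened for NVMe/TCP. Replayed in condensed form (interface names, addresses, and the iptables rule are exactly the ones this run used):

    # Target side gets its own namespace so initiator and target traffic
    # really cross the wire instead of short-circuiting through loopback.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Let NVMe/TCP (port 4420) in from the initiator interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT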
11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:22.839 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:22.839 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.528 ms 00:27:22.839 00:27:22.839 --- 10.0.0.2 ping statistics --- 00:27:22.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.839 rtt min/avg/max/mdev = 0.528/0.528/0.528/0.000 ms 00:27:22.839 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:22.839 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:22.839 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:27:22.839 00:27:22.839 --- 10.0.0.1 ping statistics --- 00:27:22.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.839 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:27:22.839 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:22.839 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:27:22.839 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:22.839 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:22.839 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:22.839 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:22.839 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:22.839 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:22.839 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:22.839 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:22.839 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:22.839 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:22.839 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:22.839 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=1213196 00:27:22.839 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 1213196 00:27:22.839 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:22.839 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 1213196 ']' 00:27:22.839 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:22.839 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:22.839 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:22.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:22.839 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:22.839 11:50:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:22.839 [2024-11-15 11:50:47.480652] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:27:22.839 [2024-11-15 11:50:47.480736] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:22.839 [2024-11-15 11:50:47.581203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.839 [2024-11-15 11:50:47.630946] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:22.839 [2024-11-15 11:50:47.631001] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:22.839 [2024-11-15 11:50:47.631013] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:22.839 [2024-11-15 11:50:47.631023] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:22.839 [2024-11-15 11:50:47.631032] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:22.839 [2024-11-15 11:50:47.631859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.839 11:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:22.839 11:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:27:22.839 11:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:22.839 11:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:22.839 11:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:23.099 11:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:23.099 11:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:23.099 11:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.099 11:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:23.099 [2024-11-15 11:50:48.353299] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:23.099 [2024-11-15 11:50:48.361610] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:23.099 null0 00:27:23.099 [2024-11-15 11:50:48.393498] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:23.099 11:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.099 11:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1213527 00:27:23.099 11:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1213527 /tmp/host.sock 00:27:23.099 11:50:48 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:23.099 11:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 1213527 ']' 00:27:23.099 11:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:27:23.099 11:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:23.099 11:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:23.099 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:23.099 11:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:23.099 11:50:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:23.099 [2024-11-15 11:50:48.472323] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:27:23.099 [2024-11-15 11:50:48.472387] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1213527 ] 00:27:23.099 [2024-11-15 11:50:48.566176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.360 [2024-11-15 11:50:48.619195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:23.932 11:50:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:23.932 11:50:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:27:23.932 11:50:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:23.932 11:50:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:23.932 11:50:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.932 11:50:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:23.932 11:50:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.932 11:50:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:23.932 11:50:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.932 11:50:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:23.932 11:50:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.932 11:50:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:23.932 11:50:49 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.932 11:50:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:25.314 [2024-11-15 11:50:50.442494] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:25.314 [2024-11-15 11:50:50.442516] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:25.314 [2024-11-15 11:50:50.442529] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:25.314 [2024-11-15 11:50:50.569970] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:25.314 [2024-11-15 11:50:50.791214] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:27:25.314 [2024-11-15 11:50:50.792191] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x671410:1 started. 00:27:25.314 [2024-11-15 11:50:50.793752] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:25.314 [2024-11-15 11:50:50.793794] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:25.314 [2024-11-15 11:50:50.793817] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:25.314 [2024-11-15 11:50:50.793830] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:25.314 [2024-11-15 11:50:50.793851] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:25.314 11:50:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.314 11:50:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:25.314 11:50:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:25.315 11:50:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:25.315 [2024-11-15 11:50:50.801604] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x671410 was disconnected and freed. delete nvme_qpair. 
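Stepping back from the trace: the host side is a second SPDK app, started with --wait-for-rpc so bdev_nvme options can be applied before subsystem init, which then attaches through the discovery service at 10.0.0.2:8009. The deliberately tight reconnect/loss timeouts are the knobs the rest of the test exercises. A condensed sketch of that bringup (workspace paths shortened; flags exactly as traced):

    # Host app on core 0, private RPC socket, bdev_nvme debug logging.
    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
    # (the test waits for /tmp/host.sock to appear before issuing RPCs)

    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    ./scripts/rpc.py -s /tmp/host.sock framework_start_init

    # Attach via discovery: retry every 1 s, declare the ctrlr lost after 2 s.
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach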
00:27:25.315 11:50:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:25.315 11:50:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.315 11:50:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:25.315 11:50:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:25.315 11:50:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:25.574 11:50:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.574 11:50:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:25.574 11:50:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:25.574 11:50:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:25.574 11:50:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:25.574 11:50:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:25.574 11:50:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:25.574 11:50:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:25.574 11:50:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.574 11:50:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:25.574 11:50:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:25.574 11:50:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:25.574 11:50:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.574 11:50:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:25.574 11:50:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:26.953 11:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:26.953 11:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:26.953 11:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:26.953 11:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.953 11:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:26.953 11:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:26.953 11:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:26.953 11:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.953 11:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:26.953 11:50:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:27.892 11:50:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:27.892 11:50:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:27.892 11:50:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:27.892 11:50:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.892 11:50:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:27.892 11:50:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:27.892 11:50:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:27.892 11:50:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.892 11:50:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:27.892 11:50:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:28.830 11:50:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:28.830 11:50:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:28.830 11:50:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:28.830 11:50:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.830 11:50:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:28.830 11:50:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:28.830 11:50:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:28.830 11:50:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.830 11:50:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:28.830 11:50:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:29.771 11:50:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:29.771 11:50:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:29.771 11:50:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:29.771 11:50:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.771 11:50:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:29.771 11:50:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:29.771 11:50:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:29.771 11:50:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:27:29.771 11:50:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:29.771 11:50:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:31.152 [2024-11-15 11:50:56.234424] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:31.152 [2024-11-15 11:50:56.234459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:31.152 [2024-11-15 11:50:56.234467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.152 [2024-11-15 11:50:56.234474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:31.152 [2024-11-15 11:50:56.234480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.152 [2024-11-15 11:50:56.234486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:31.152 [2024-11-15 11:50:56.234491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.152 [2024-11-15 11:50:56.234497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:31.152 [2024-11-15 11:50:56.234502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.152 [2024-11-15 11:50:56.234508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:31.152 [2024-11-15 11:50:56.234513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:31.152 [2024-11-15 11:50:56.234519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64dc00 is same with the state(6) to be set 00:27:31.152 11:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:31.152 [2024-11-15 11:50:56.244446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x64dc00 (9): Bad file descriptor 00:27:31.152 11:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:31.153 11:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:31.153 11:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.153 11:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:31.153 11:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:31.153 11:50:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:31.153 [2024-11-15 11:50:56.254481] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
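The get_bdev_list / sleep 1 pattern that keeps repeating above and below is the test's only synchronization primitive: wait_for_bdev polls the host's flattened bdev list until it equals the expected string. Distilled (RPC socket and jq pipeline as traced; function bodies simplified from the script):

    get_bdev_list() {   # e.g. "nvme0n1" while attached, "" once torn down
        ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local expected="$1"
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }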
00:27:31.153 [2024-11-15 11:50:56.254491] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:31.153 [2024-11-15 11:50:56.254495] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:31.153 [2024-11-15 11:50:56.254502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:31.153 [2024-11-15 11:50:56.254518] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:32.095 [2024-11-15 11:50:57.271652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:32.095 [2024-11-15 11:50:57.271744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x64dc00 with addr=10.0.0.2, port=4420 00:27:32.095 [2024-11-15 11:50:57.271776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x64dc00 is same with the state(6) to be set 00:27:32.095 [2024-11-15 11:50:57.271831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x64dc00 (9): Bad file descriptor 00:27:32.095 [2024-11-15 11:50:57.271970] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:27:32.095 [2024-11-15 11:50:57.272031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:32.095 [2024-11-15 11:50:57.272055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:32.095 [2024-11-15 11:50:57.272078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:32.095 [2024-11-15 11:50:57.272099] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:32.095 [2024-11-15 11:50:57.272115] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:32.095 [2024-11-15 11:50:57.272129] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:32.095 [2024-11-15 11:50:57.272152] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:32.095 [2024-11-15 11:50:57.272167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:32.095 11:50:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.095 11:50:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:32.095 11:50:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:33.035 [2024-11-15 11:50:58.274574] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:33.035 [2024-11-15 11:50:58.274590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:27:33.035 [2024-11-15 11:50:58.274599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:33.035 [2024-11-15 11:50:58.274604] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:33.035 [2024-11-15 11:50:58.274610] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:27:33.035 [2024-11-15 11:50:58.274615] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:33.035 [2024-11-15 11:50:58.274618] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:33.035 [2024-11-15 11:50:58.274622] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:33.035 [2024-11-15 11:50:58.274638] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:33.035 [2024-11-15 11:50:58.274654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.035 [2024-11-15 11:50:58.274662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.035 [2024-11-15 11:50:58.274673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.035 [2024-11-15 11:50:58.274678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.035 [2024-11-15 11:50:58.274684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.035 [2024-11-15 11:50:58.274689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.035 [2024-11-15 11:50:58.274695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.035 [2024-11-15 11:50:58.274700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.035 [2024-11-15 11:50:58.274706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:33.035 [2024-11-15 11:50:58.274711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:33.035 [2024-11-15 11:50:58.274716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:27:33.035 [2024-11-15 11:50:58.274946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x63d340 (9): Bad file descriptor 00:27:33.035 [2024-11-15 11:50:58.275955] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:33.035 [2024-11-15 11:50:58.275964] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:27:33.035 11:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:33.035 11:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:33.035 11:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:33.035 11:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.035 11:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:33.035 11:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:33.035 11:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:33.035 11:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.035 11:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:33.035 11:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:33.035 11:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:33.036 11:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:33.036 11:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:33.036 11:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:33.036 11:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:33.036 11:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.036 11:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:33.036 11:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:33.036 11:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:33.036 11:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.036 11:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:33.036 11:50:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:34.418 11:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:34.418 11:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:34.418 11:50:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:34.418 11:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.418 11:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:34.418 11:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:34.418 11:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:34.418 11:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.418 11:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:34.418 11:50:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:34.994 [2024-11-15 11:51:00.328730] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:34.994 [2024-11-15 11:51:00.328749] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:34.994 [2024-11-15 11:51:00.328760] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:34.994 [2024-11-15 11:51:00.417011] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:35.266 11:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:35.266 11:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:35.266 11:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:35.266 11:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.266 11:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:35.266 11:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:35.266 11:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:35.266 11:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.266 [2024-11-15 11:51:00.599040] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:27:35.266 [2024-11-15 11:51:00.599822] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x642260:1 started. 
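The polling loop above (wait_for_bdev nvme1n1) repeats get_bdev_list once per second until the rediscovered namespace shows up. Stripped of the xtrace prefixes, the helper is one RPC piped through jq, assuming scripts/rpc.py as the backend that autotest's rpc_cmd wraps:

    # Equivalent of the traced get_bdev_list: list bdev names from the host
    # app listening on /tmp/host.sock, sorted and joined onto one line.
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs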
00:27:35.266 [2024-11-15 11:51:00.600728] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:35.266 [2024-11-15 11:51:00.600756] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:35.266 [2024-11-15 11:51:00.600772] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:35.266 [2024-11-15 11:51:00.600784] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:35.266 [2024-11-15 11:51:00.600790] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:35.266 11:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:35.266 11:51:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:35.266 [2024-11-15 11:51:00.605452] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x642260 was disconnected and freed. delete nvme_qpair. 00:27:36.257 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:36.257 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:36.257 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:36.257 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.257 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:36.257 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:36.257 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:36.257 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.257 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:36.257 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:36.257 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1213527 00:27:36.257 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 1213527 ']' 00:27:36.257 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 1213527 00:27:36.257 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:27:36.257 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:36.257 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1213527 00:27:36.257 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:36.257 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:36.257 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1213527' 00:27:36.257 killing process with pid 1213527 
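The attach/found-again/free sequence above is the discovery service doing its job: the discovery controller at 10.0.0.2:8009 advertises nqn.2016-06.io.spdk:cnode0 again, bdev_nvme creates nvme1, and the short-lived qpair 0x642260 used during bring-up is disconnected and freed. Starting such a discovery service by hand looks roughly like this (flag spelling assumed, including -w for wait-for-attach; verify with rpc.py bdev_nvme_start_discovery -h):

    # Hypothetical sketch: attach everything the discovery subsystem at
    # 10.0.0.2:8009 advertises, naming controllers nvme1, nvme2, ...
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -w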
00:27:36.257 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 1213527 00:27:36.257 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 1213527 00:27:36.556 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:36.556 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:36.556 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:27:36.556 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:36.556 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:27:36.556 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:36.556 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:36.556 rmmod nvme_tcp 00:27:36.556 rmmod nvme_fabrics 00:27:36.556 rmmod nvme_keyring 00:27:36.556 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:36.556 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:27:36.556 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:27:36.556 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 1213196 ']' 00:27:36.556 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 1213196 00:27:36.556 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 1213196 ']' 00:27:36.556 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 1213196 00:27:36.556 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:27:36.556 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:36.556 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1213196 00:27:36.556 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:36.556 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:36.556 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1213196' 00:27:36.556 killing process with pid 1213196 00:27:36.556 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 1213196 00:27:36.556 11:51:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 1213196 00:27:36.817 11:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:36.817 11:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:36.817 11:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:36.817 11:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:27:36.817 11:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:27:36.817 11:51:02 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:36.817 11:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:27:36.817 11:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:36.817 11:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:36.817 11:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.817 11:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:36.817 11:51:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.725 11:51:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:38.725 00:27:38.725 real 0m24.476s 00:27:38.725 user 0m29.592s 00:27:38.725 sys 0m7.162s 00:27:38.725 11:51:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:38.725 11:51:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:38.725 ************************************ 00:27:38.725 END TEST nvmf_discovery_remove_ifc 00:27:38.725 ************************************ 00:27:38.725 11:51:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:38.725 11:51:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:38.725 11:51:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:38.725 11:51:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.986 ************************************ 00:27:38.986 START TEST nvmf_identify_kernel_target 00:27:38.986 ************************************ 00:27:38.986 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:38.986 * Looking for test storage... 
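Before the identify test's setup output continues, a recap of the nvmftestfini teardown just traced: SPDK-tagged firewall rules are filtered back out, the kernel initiator modules are unloaded (the rmmod lines), and the target namespace is removed. Condensed, with the _remove_spdk_ns body inferred from its effects:

    # Strip only the iptables rules carrying the SPDK_NVMF comment tag,
    # then unload the initiator modules and drop the target namespace.
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk   # assumed: what _remove_spdk_ns boils down to
    ip -4 addr flush cvl_0_1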
00:27:38.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:38.986 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:38.986 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:27:38.986 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:38.986 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:38.986 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:38.986 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:38.986 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:38.986 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:27:38.986 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:27:38.986 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:27:38.986 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:38.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.987 --rc genhtml_branch_coverage=1 00:27:38.987 --rc genhtml_function_coverage=1 00:27:38.987 --rc genhtml_legend=1 00:27:38.987 --rc geninfo_all_blocks=1 00:27:38.987 --rc geninfo_unexecuted_blocks=1 00:27:38.987 00:27:38.987 ' 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:38.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.987 --rc genhtml_branch_coverage=1 00:27:38.987 --rc genhtml_function_coverage=1 00:27:38.987 --rc genhtml_legend=1 00:27:38.987 --rc geninfo_all_blocks=1 00:27:38.987 --rc geninfo_unexecuted_blocks=1 00:27:38.987 00:27:38.987 ' 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:38.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.987 --rc genhtml_branch_coverage=1 00:27:38.987 --rc genhtml_function_coverage=1 00:27:38.987 --rc genhtml_legend=1 00:27:38.987 --rc geninfo_all_blocks=1 00:27:38.987 --rc geninfo_unexecuted_blocks=1 00:27:38.987 00:27:38.987 ' 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:38.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:38.987 --rc genhtml_branch_coverage=1 00:27:38.987 --rc genhtml_function_coverage=1 00:27:38.987 --rc genhtml_legend=1 00:27:38.987 --rc geninfo_all_blocks=1 00:27:38.987 --rc geninfo_unexecuted_blocks=1 00:27:38.987 00:27:38.987 ' 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:38.987 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.988 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.988 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.988 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:38.988 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:38.988 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:27:38.988 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:38.988 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:38.988 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:38.988 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:38.988 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:38.988 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:27:38.988 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:38.988 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:38.988 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:38.988 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:38.988 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:38.988 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:38.988 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:38.988 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:38.988 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:38.988 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:38.988 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.988 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:38.988 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.988 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:38.988 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:38.988 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:27:38.988 11:51:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:47.127 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:47.127 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:27:47.127 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:47.127 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:47.127 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:47.127 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:47.127 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:47.127 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:27:47.127 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:47.127 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:27:47.127 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:27:47.127 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:27:47.128 11:51:11 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:47.128 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:47.128 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:47.128 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:47.128 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:47.128 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:47.128 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:27:47.128 00:27:47.128 --- 10.0.0.2 ping statistics --- 00:27:47.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.128 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:47.128 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:47.128 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:27:47.128 00:27:47.128 --- 10.0.0.1 ping statistics --- 00:27:47.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:47.128 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:27:47.128 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:47.129 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:47.129 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:47.129 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:47.129 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:47.129 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:47.129 11:51:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:47.129 11:51:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:47.129 11:51:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:47.129 11:51:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:27:47.129 11:51:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:47.129 11:51:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.129 11:51:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.129 11:51:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.129 11:51:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:47.129 11:51:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.129 11:51:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.129 11:51:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.129 11:51:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.129 11:51:12 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:47.129 11:51:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:47.129 11:51:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:47.129 11:51:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:47.129 11:51:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:47.129 11:51:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:47.129 11:51:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:47.129 11:51:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:27:47.129 11:51:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:27:47.129 11:51:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:47.129 11:51:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:47.129 11:51:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:50.426 Waiting for block devices as requested 00:27:50.426 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:50.426 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:50.426 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:50.426 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:50.426 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:50.686 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:50.686 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:50.686 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:50.947 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:50.947 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:51.208 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:51.208 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:51.208 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:51.468 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:51.468 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:51.468 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:51.730 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:51.992 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:51.992 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:51.992 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:51.992 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:27:51.992 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:51.992 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
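Two fixtures get built around this point. First, nvmf_tcp_init (traced earlier) moves cvl_0_0 into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24 while the initiator keeps cvl_0_1 at 10.0.0.1/24, and opens TCP/4420 with an SPDK_NVMF-tagged iptables rule. Second, configure_kernel_target, traced just below, exports the freshly reset /dev/nvme0n1 through the kernel's nvmet configfs interface. Bash xtrace does not print redirection targets, so the file names here are inferred from the standard nvmet layout; the attr_model guess is corroborated by the later identify output showing Model Number: SPDK-nqn.2016-06.io.spdk:testnqn.

    # Kernel NVMe-oF TCP target setup as traced below (redirection targets inferred).
    modprobe nvmet
    cd /sys/kernel/config/nvmet
    mkdir -p subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 ports/1
    echo SPDK-nqn.2016-06.io.spdk:testnqn > subsystems/nqn.2016-06.io.spdk:testnqn/attr_model
    echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
    echo /dev/nvme0n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
    echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
    echo 10.0.0.1 > ports/1/addr_traddr
    echo tcp      > ports/1/addr_trtype
    echo 4420     > ports/1/addr_trsvcid
    echo ipv4     > ports/1/addr_adrfam
    ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/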
00:27:51.992 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:51.992 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:51.992 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:51.992 No valid GPT data, bailing 00:27:51.992 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:51.992 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:27:51.992 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:27:51.992 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:51.992 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:51.992 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:51.992 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:51.992 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:51.992 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:51.992 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:27:51.992 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:51.992 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:27:51.992 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:27:51.992 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:27:51.992 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:27:51.992 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:27:51.992 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:51.992 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:51.992 00:27:51.992 Discovery Log Number of Records 2, Generation counter 2 00:27:51.992 =====Discovery Log Entry 0====== 00:27:51.992 trtype: tcp 00:27:51.992 adrfam: ipv4 00:27:51.992 subtype: current discovery subsystem 00:27:51.992 treq: not specified, sq flow control disable supported 00:27:51.992 portid: 1 00:27:51.992 trsvcid: 4420 00:27:51.992 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:51.992 traddr: 10.0.0.1 00:27:51.992 eflags: none 00:27:51.992 sectype: none 00:27:51.992 =====Discovery Log Entry 1====== 00:27:51.992 trtype: tcp 00:27:51.992 adrfam: ipv4 00:27:51.992 subtype: nvme subsystem 00:27:51.992 treq: not specified, sq flow control disable 
supported 00:27:51.992 portid: 1 00:27:51.992 trsvcid: 4420 00:27:51.992 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:51.992 traddr: 10.0.0.1 00:27:51.992 eflags: none 00:27:51.992 sectype: none 00:27:51.992 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:51.992 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:52.255 ===================================================== 00:27:52.255 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:52.255 ===================================================== 00:27:52.255 Controller Capabilities/Features 00:27:52.255 ================================ 00:27:52.255 Vendor ID: 0000 00:27:52.255 Subsystem Vendor ID: 0000 00:27:52.255 Serial Number: 770de2119ddb12ddf5bb 00:27:52.255 Model Number: Linux 00:27:52.255 Firmware Version: 6.8.9-20 00:27:52.255 Recommended Arb Burst: 0 00:27:52.255 IEEE OUI Identifier: 00 00 00 00:27:52.255 Multi-path I/O 00:27:52.255 May have multiple subsystem ports: No 00:27:52.255 May have multiple controllers: No 00:27:52.255 Associated with SR-IOV VF: No 00:27:52.255 Max Data Transfer Size: Unlimited 00:27:52.255 Max Number of Namespaces: 0 00:27:52.255 Max Number of I/O Queues: 1024 00:27:52.255 NVMe Specification Version (VS): 1.3 00:27:52.255 NVMe Specification Version (Identify): 1.3 00:27:52.255 Maximum Queue Entries: 1024 00:27:52.255 Contiguous Queues Required: No 00:27:52.255 Arbitration Mechanisms Supported 00:27:52.255 Weighted Round Robin: Not Supported 00:27:52.255 Vendor Specific: Not Supported 00:27:52.255 Reset Timeout: 7500 ms 00:27:52.255 Doorbell Stride: 4 bytes 00:27:52.255 NVM Subsystem Reset: Not Supported 00:27:52.255 Command Sets Supported 00:27:52.255 NVM Command Set: Supported 00:27:52.255 Boot Partition: Not Supported 00:27:52.255 Memory Page Size Minimum: 4096 bytes 00:27:52.255 Memory Page Size Maximum: 4096 bytes 00:27:52.255 Persistent Memory Region: Not Supported 00:27:52.255 Optional Asynchronous Events Supported 00:27:52.255 Namespace Attribute Notices: Not Supported 00:27:52.255 Firmware Activation Notices: Not Supported 00:27:52.255 ANA Change Notices: Not Supported 00:27:52.255 PLE Aggregate Log Change Notices: Not Supported 00:27:52.255 LBA Status Info Alert Notices: Not Supported 00:27:52.255 EGE Aggregate Log Change Notices: Not Supported 00:27:52.255 Normal NVM Subsystem Shutdown event: Not Supported 00:27:52.255 Zone Descriptor Change Notices: Not Supported 00:27:52.255 Discovery Log Change Notices: Supported 00:27:52.255 Controller Attributes 00:27:52.255 128-bit Host Identifier: Not Supported 00:27:52.255 Non-Operational Permissive Mode: Not Supported 00:27:52.255 NVM Sets: Not Supported 00:27:52.255 Read Recovery Levels: Not Supported 00:27:52.255 Endurance Groups: Not Supported 00:27:52.255 Predictable Latency Mode: Not Supported 00:27:52.255 Traffic Based Keep ALive: Not Supported 00:27:52.255 Namespace Granularity: Not Supported 00:27:52.255 SQ Associations: Not Supported 00:27:52.255 UUID List: Not Supported 00:27:52.255 Multi-Domain Subsystem: Not Supported 00:27:52.255 Fixed Capacity Management: Not Supported 00:27:52.255 Variable Capacity Management: Not Supported 00:27:52.255 Delete Endurance Group: Not Supported 00:27:52.255 Delete NVM Set: Not Supported 00:27:52.255 Extended LBA Formats Supported: Not Supported 00:27:52.255 Flexible Data Placement 
Supported: Not Supported 00:27:52.255 00:27:52.255 Controller Memory Buffer Support 00:27:52.255 ================================ 00:27:52.255 Supported: No 00:27:52.255 00:27:52.255 Persistent Memory Region Support 00:27:52.255 ================================ 00:27:52.255 Supported: No 00:27:52.255 00:27:52.255 Admin Command Set Attributes 00:27:52.255 ============================ 00:27:52.255 Security Send/Receive: Not Supported 00:27:52.255 Format NVM: Not Supported 00:27:52.255 Firmware Activate/Download: Not Supported 00:27:52.255 Namespace Management: Not Supported 00:27:52.255 Device Self-Test: Not Supported 00:27:52.255 Directives: Not Supported 00:27:52.255 NVMe-MI: Not Supported 00:27:52.255 Virtualization Management: Not Supported 00:27:52.255 Doorbell Buffer Config: Not Supported 00:27:52.255 Get LBA Status Capability: Not Supported 00:27:52.255 Command & Feature Lockdown Capability: Not Supported 00:27:52.255 Abort Command Limit: 1 00:27:52.255 Async Event Request Limit: 1 00:27:52.255 Number of Firmware Slots: N/A 00:27:52.255 Firmware Slot 1 Read-Only: N/A 00:27:52.255 Firmware Activation Without Reset: N/A 00:27:52.255 Multiple Update Detection Support: N/A 00:27:52.255 Firmware Update Granularity: No Information Provided 00:27:52.255 Per-Namespace SMART Log: No 00:27:52.255 Asymmetric Namespace Access Log Page: Not Supported 00:27:52.255 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:52.255 Command Effects Log Page: Not Supported 00:27:52.255 Get Log Page Extended Data: Supported 00:27:52.255 Telemetry Log Pages: Not Supported 00:27:52.255 Persistent Event Log Pages: Not Supported 00:27:52.255 Supported Log Pages Log Page: May Support 00:27:52.255 Commands Supported & Effects Log Page: Not Supported 00:27:52.256 Feature Identifiers & Effects Log Page:May Support 00:27:52.256 NVMe-MI Commands & Effects Log Page: May Support 00:27:52.256 Data Area 4 for Telemetry Log: Not Supported 00:27:52.256 Error Log Page Entries Supported: 1 00:27:52.256 Keep Alive: Not Supported 00:27:52.256 00:27:52.256 NVM Command Set Attributes 00:27:52.256 ========================== 00:27:52.256 Submission Queue Entry Size 00:27:52.256 Max: 1 00:27:52.256 Min: 1 00:27:52.256 Completion Queue Entry Size 00:27:52.256 Max: 1 00:27:52.256 Min: 1 00:27:52.256 Number of Namespaces: 0 00:27:52.256 Compare Command: Not Supported 00:27:52.256 Write Uncorrectable Command: Not Supported 00:27:52.256 Dataset Management Command: Not Supported 00:27:52.256 Write Zeroes Command: Not Supported 00:27:52.256 Set Features Save Field: Not Supported 00:27:52.256 Reservations: Not Supported 00:27:52.256 Timestamp: Not Supported 00:27:52.256 Copy: Not Supported 00:27:52.256 Volatile Write Cache: Not Present 00:27:52.256 Atomic Write Unit (Normal): 1 00:27:52.256 Atomic Write Unit (PFail): 1 00:27:52.256 Atomic Compare & Write Unit: 1 00:27:52.256 Fused Compare & Write: Not Supported 00:27:52.256 Scatter-Gather List 00:27:52.256 SGL Command Set: Supported 00:27:52.256 SGL Keyed: Not Supported 00:27:52.256 SGL Bit Bucket Descriptor: Not Supported 00:27:52.256 SGL Metadata Pointer: Not Supported 00:27:52.256 Oversized SGL: Not Supported 00:27:52.256 SGL Metadata Address: Not Supported 00:27:52.256 SGL Offset: Supported 00:27:52.256 Transport SGL Data Block: Not Supported 00:27:52.256 Replay Protected Memory Block: Not Supported 00:27:52.256 00:27:52.256 Firmware Slot Information 00:27:52.256 ========================= 00:27:52.256 Active slot: 0 00:27:52.256 00:27:52.256 00:27:52.256 Error Log 00:27:52.256 
========= 00:27:52.256 00:27:52.256 Active Namespaces 00:27:52.256 ================= 00:27:52.256 Discovery Log Page 00:27:52.256 ================== 00:27:52.256 Generation Counter: 2 00:27:52.256 Number of Records: 2 00:27:52.256 Record Format: 0 00:27:52.256 00:27:52.256 Discovery Log Entry 0 00:27:52.256 ---------------------- 00:27:52.256 Transport Type: 3 (TCP) 00:27:52.256 Address Family: 1 (IPv4) 00:27:52.256 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:52.256 Entry Flags: 00:27:52.256 Duplicate Returned Information: 0 00:27:52.256 Explicit Persistent Connection Support for Discovery: 0 00:27:52.256 Transport Requirements: 00:27:52.256 Secure Channel: Not Specified 00:27:52.256 Port ID: 1 (0x0001) 00:27:52.256 Controller ID: 65535 (0xffff) 00:27:52.256 Admin Max SQ Size: 32 00:27:52.256 Transport Service Identifier: 4420 00:27:52.256 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:52.256 Transport Address: 10.0.0.1 00:27:52.256 Discovery Log Entry 1 00:27:52.256 ---------------------- 00:27:52.256 Transport Type: 3 (TCP) 00:27:52.256 Address Family: 1 (IPv4) 00:27:52.256 Subsystem Type: 2 (NVM Subsystem) 00:27:52.256 Entry Flags: 00:27:52.256 Duplicate Returned Information: 0 00:27:52.256 Explicit Persistent Connection Support for Discovery: 0 00:27:52.256 Transport Requirements: 00:27:52.256 Secure Channel: Not Specified 00:27:52.256 Port ID: 1 (0x0001) 00:27:52.256 Controller ID: 65535 (0xffff) 00:27:52.256 Admin Max SQ Size: 32 00:27:52.256 Transport Service Identifier: 4420 00:27:52.256 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:52.256 Transport Address: 10.0.0.1 00:27:52.256 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:52.256 get_feature(0x01) failed 00:27:52.256 get_feature(0x02) failed 00:27:52.256 get_feature(0x04) failed 00:27:52.256 ===================================================== 00:27:52.256 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:52.256 ===================================================== 00:27:52.256 Controller Capabilities/Features 00:27:52.256 ================================ 00:27:52.256 Vendor ID: 0000 00:27:52.256 Subsystem Vendor ID: 0000 00:27:52.256 Serial Number: 84495c0b9466ad478bda 00:27:52.256 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:52.256 Firmware Version: 6.8.9-20 00:27:52.256 Recommended Arb Burst: 6 00:27:52.256 IEEE OUI Identifier: 00 00 00 00:27:52.256 Multi-path I/O 00:27:52.256 May have multiple subsystem ports: Yes 00:27:52.256 May have multiple controllers: Yes 00:27:52.256 Associated with SR-IOV VF: No 00:27:52.256 Max Data Transfer Size: Unlimited 00:27:52.256 Max Number of Namespaces: 1024 00:27:52.256 Max Number of I/O Queues: 128 00:27:52.256 NVMe Specification Version (VS): 1.3 00:27:52.256 NVMe Specification Version (Identify): 1.3 00:27:52.256 Maximum Queue Entries: 1024 00:27:52.256 Contiguous Queues Required: No 00:27:52.256 Arbitration Mechanisms Supported 00:27:52.256 Weighted Round Robin: Not Supported 00:27:52.256 Vendor Specific: Not Supported 00:27:52.256 Reset Timeout: 7500 ms 00:27:52.256 Doorbell Stride: 4 bytes 00:27:52.256 NVM Subsystem Reset: Not Supported 00:27:52.256 Command Sets Supported 00:27:52.256 NVM Command Set: Supported 00:27:52.256 Boot Partition: Not Supported 00:27:52.256 
Memory Page Size Minimum: 4096 bytes 00:27:52.256 Memory Page Size Maximum: 4096 bytes 00:27:52.256 Persistent Memory Region: Not Supported 00:27:52.256 Optional Asynchronous Events Supported 00:27:52.256 Namespace Attribute Notices: Supported 00:27:52.256 Firmware Activation Notices: Not Supported 00:27:52.256 ANA Change Notices: Supported 00:27:52.256 PLE Aggregate Log Change Notices: Not Supported 00:27:52.256 LBA Status Info Alert Notices: Not Supported 00:27:52.256 EGE Aggregate Log Change Notices: Not Supported 00:27:52.256 Normal NVM Subsystem Shutdown event: Not Supported 00:27:52.256 Zone Descriptor Change Notices: Not Supported 00:27:52.256 Discovery Log Change Notices: Not Supported 00:27:52.256 Controller Attributes 00:27:52.256 128-bit Host Identifier: Supported 00:27:52.256 Non-Operational Permissive Mode: Not Supported 00:27:52.256 NVM Sets: Not Supported 00:27:52.256 Read Recovery Levels: Not Supported 00:27:52.256 Endurance Groups: Not Supported 00:27:52.256 Predictable Latency Mode: Not Supported 00:27:52.256 Traffic Based Keep ALive: Supported 00:27:52.256 Namespace Granularity: Not Supported 00:27:52.256 SQ Associations: Not Supported 00:27:52.256 UUID List: Not Supported 00:27:52.256 Multi-Domain Subsystem: Not Supported 00:27:52.256 Fixed Capacity Management: Not Supported 00:27:52.256 Variable Capacity Management: Not Supported 00:27:52.256 Delete Endurance Group: Not Supported 00:27:52.256 Delete NVM Set: Not Supported 00:27:52.256 Extended LBA Formats Supported: Not Supported 00:27:52.256 Flexible Data Placement Supported: Not Supported 00:27:52.256 00:27:52.256 Controller Memory Buffer Support 00:27:52.256 ================================ 00:27:52.256 Supported: No 00:27:52.256 00:27:52.256 Persistent Memory Region Support 00:27:52.256 ================================ 00:27:52.256 Supported: No 00:27:52.256 00:27:52.256 Admin Command Set Attributes 00:27:52.256 ============================ 00:27:52.256 Security Send/Receive: Not Supported 00:27:52.256 Format NVM: Not Supported 00:27:52.256 Firmware Activate/Download: Not Supported 00:27:52.256 Namespace Management: Not Supported 00:27:52.256 Device Self-Test: Not Supported 00:27:52.256 Directives: Not Supported 00:27:52.256 NVMe-MI: Not Supported 00:27:52.256 Virtualization Management: Not Supported 00:27:52.256 Doorbell Buffer Config: Not Supported 00:27:52.256 Get LBA Status Capability: Not Supported 00:27:52.256 Command & Feature Lockdown Capability: Not Supported 00:27:52.256 Abort Command Limit: 4 00:27:52.256 Async Event Request Limit: 4 00:27:52.256 Number of Firmware Slots: N/A 00:27:52.256 Firmware Slot 1 Read-Only: N/A 00:27:52.256 Firmware Activation Without Reset: N/A 00:27:52.256 Multiple Update Detection Support: N/A 00:27:52.256 Firmware Update Granularity: No Information Provided 00:27:52.256 Per-Namespace SMART Log: Yes 00:27:52.256 Asymmetric Namespace Access Log Page: Supported 00:27:52.256 ANA Transition Time : 10 sec 00:27:52.256 00:27:52.256 Asymmetric Namespace Access Capabilities 00:27:52.256 ANA Optimized State : Supported 00:27:52.256 ANA Non-Optimized State : Supported 00:27:52.256 ANA Inaccessible State : Supported 00:27:52.256 ANA Persistent Loss State : Supported 00:27:52.256 ANA Change State : Supported 00:27:52.256 ANAGRPID is not changed : No 00:27:52.256 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:52.256 00:27:52.256 ANA Group Identifier Maximum : 128 00:27:52.256 Number of ANA Group Identifiers : 128 00:27:52.256 Max Number of Allowed Namespaces : 1024 00:27:52.257 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:52.257 Command Effects Log Page: Supported 00:27:52.257 Get Log Page Extended Data: Supported 00:27:52.257 Telemetry Log Pages: Not Supported 00:27:52.257 Persistent Event Log Pages: Not Supported 00:27:52.257 Supported Log Pages Log Page: May Support 00:27:52.257 Commands Supported & Effects Log Page: Not Supported 00:27:52.257 Feature Identifiers & Effects Log Page:May Support 00:27:52.257 NVMe-MI Commands & Effects Log Page: May Support 00:27:52.257 Data Area 4 for Telemetry Log: Not Supported 00:27:52.257 Error Log Page Entries Supported: 128 00:27:52.257 Keep Alive: Supported 00:27:52.257 Keep Alive Granularity: 1000 ms 00:27:52.257 00:27:52.257 NVM Command Set Attributes 00:27:52.257 ========================== 00:27:52.257 Submission Queue Entry Size 00:27:52.257 Max: 64 00:27:52.257 Min: 64 00:27:52.257 Completion Queue Entry Size 00:27:52.257 Max: 16 00:27:52.257 Min: 16 00:27:52.257 Number of Namespaces: 1024 00:27:52.257 Compare Command: Not Supported 00:27:52.257 Write Uncorrectable Command: Not Supported 00:27:52.257 Dataset Management Command: Supported 00:27:52.257 Write Zeroes Command: Supported 00:27:52.257 Set Features Save Field: Not Supported 00:27:52.257 Reservations: Not Supported 00:27:52.257 Timestamp: Not Supported 00:27:52.257 Copy: Not Supported 00:27:52.257 Volatile Write Cache: Present 00:27:52.257 Atomic Write Unit (Normal): 1 00:27:52.257 Atomic Write Unit (PFail): 1 00:27:52.257 Atomic Compare & Write Unit: 1 00:27:52.257 Fused Compare & Write: Not Supported 00:27:52.257 Scatter-Gather List 00:27:52.257 SGL Command Set: Supported 00:27:52.257 SGL Keyed: Not Supported 00:27:52.257 SGL Bit Bucket Descriptor: Not Supported 00:27:52.257 SGL Metadata Pointer: Not Supported 00:27:52.257 Oversized SGL: Not Supported 00:27:52.257 SGL Metadata Address: Not Supported 00:27:52.257 SGL Offset: Supported 00:27:52.257 Transport SGL Data Block: Not Supported 00:27:52.257 Replay Protected Memory Block: Not Supported 00:27:52.257 00:27:52.257 Firmware Slot Information 00:27:52.257 ========================= 00:27:52.257 Active slot: 0 00:27:52.257 00:27:52.257 Asymmetric Namespace Access 00:27:52.257 =========================== 00:27:52.257 Change Count : 0 00:27:52.257 Number of ANA Group Descriptors : 1 00:27:52.257 ANA Group Descriptor : 0 00:27:52.257 ANA Group ID : 1 00:27:52.257 Number of NSID Values : 1 00:27:52.257 Change Count : 0 00:27:52.257 ANA State : 1 00:27:52.257 Namespace Identifier : 1 00:27:52.257 00:27:52.257 Commands Supported and Effects 00:27:52.257 ============================== 00:27:52.257 Admin Commands 00:27:52.257 -------------- 00:27:52.257 Get Log Page (02h): Supported 00:27:52.257 Identify (06h): Supported 00:27:52.257 Abort (08h): Supported 00:27:52.257 Set Features (09h): Supported 00:27:52.257 Get Features (0Ah): Supported 00:27:52.257 Asynchronous Event Request (0Ch): Supported 00:27:52.257 Keep Alive (18h): Supported 00:27:52.257 I/O Commands 00:27:52.257 ------------ 00:27:52.257 Flush (00h): Supported 00:27:52.257 Write (01h): Supported LBA-Change 00:27:52.257 Read (02h): Supported 00:27:52.257 Write Zeroes (08h): Supported LBA-Change 00:27:52.257 Dataset Management (09h): Supported 00:27:52.257 00:27:52.257 Error Log 00:27:52.257 ========= 00:27:52.257 Entry: 0 00:27:52.257 Error Count: 0x3 00:27:52.257 Submission Queue Id: 0x0 00:27:52.257 Command Id: 0x5 00:27:52.257 Phase Bit: 0 00:27:52.257 Status Code: 0x2 00:27:52.257 Status Code Type: 0x0 00:27:52.257 Do Not Retry: 1 00:27:52.257 
Error Location: 0x28 00:27:52.257 LBA: 0x0 00:27:52.257 Namespace: 0x0 00:27:52.257 Vendor Log Page: 0x0 00:27:52.257 ----------- 00:27:52.257 Entry: 1 00:27:52.257 Error Count: 0x2 00:27:52.257 Submission Queue Id: 0x0 00:27:52.257 Command Id: 0x5 00:27:52.257 Phase Bit: 0 00:27:52.257 Status Code: 0x2 00:27:52.257 Status Code Type: 0x0 00:27:52.257 Do Not Retry: 1 00:27:52.257 Error Location: 0x28 00:27:52.257 LBA: 0x0 00:27:52.257 Namespace: 0x0 00:27:52.257 Vendor Log Page: 0x0 00:27:52.257 ----------- 00:27:52.257 Entry: 2 00:27:52.257 Error Count: 0x1 00:27:52.257 Submission Queue Id: 0x0 00:27:52.257 Command Id: 0x4 00:27:52.257 Phase Bit: 0 00:27:52.257 Status Code: 0x2 00:27:52.257 Status Code Type: 0x0 00:27:52.257 Do Not Retry: 1 00:27:52.257 Error Location: 0x28 00:27:52.257 LBA: 0x0 00:27:52.257 Namespace: 0x0 00:27:52.257 Vendor Log Page: 0x0 00:27:52.257 00:27:52.257 Number of Queues 00:27:52.257 ================ 00:27:52.257 Number of I/O Submission Queues: 128 00:27:52.257 Number of I/O Completion Queues: 128 00:27:52.257 00:27:52.257 ZNS Specific Controller Data 00:27:52.257 ============================ 00:27:52.257 Zone Append Size Limit: 0 00:27:52.257 00:27:52.257 00:27:52.257 Active Namespaces 00:27:52.257 ================= 00:27:52.257 get_feature(0x05) failed 00:27:52.257 Namespace ID:1 00:27:52.257 Command Set Identifier: NVM (00h) 00:27:52.257 Deallocate: Supported 00:27:52.257 Deallocated/Unwritten Error: Not Supported 00:27:52.257 Deallocated Read Value: Unknown 00:27:52.257 Deallocate in Write Zeroes: Not Supported 00:27:52.257 Deallocated Guard Field: 0xFFFF 00:27:52.257 Flush: Supported 00:27:52.257 Reservation: Not Supported 00:27:52.257 Namespace Sharing Capabilities: Multiple Controllers 00:27:52.257 Size (in LBAs): 3750748848 (1788GiB) 00:27:52.257 Capacity (in LBAs): 3750748848 (1788GiB) 00:27:52.257 Utilization (in LBAs): 3750748848 (1788GiB) 00:27:52.257 UUID: 57f96898-db15-44f3-9190-0a2ea001d7d8 00:27:52.257 Thin Provisioning: Not Supported 00:27:52.257 Per-NS Atomic Units: Yes 00:27:52.257 Atomic Write Unit (Normal): 8 00:27:52.257 Atomic Write Unit (PFail): 8 00:27:52.257 Preferred Write Granularity: 8 00:27:52.257 Atomic Compare & Write Unit: 8 00:27:52.257 Atomic Boundary Size (Normal): 0 00:27:52.257 Atomic Boundary Size (PFail): 0 00:27:52.257 Atomic Boundary Offset: 0 00:27:52.257 NGUID/EUI64 Never Reused: No 00:27:52.257 ANA group ID: 1 00:27:52.257 Namespace Write Protected: No 00:27:52.257 Number of LBA Formats: 1 00:27:52.257 Current LBA Format: LBA Format #00 00:27:52.257 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:52.257 00:27:52.257 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:52.257 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:52.257 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:27:52.257 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:52.257 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:27:52.257 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:52.257 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:52.257 rmmod nvme_tcp 00:27:52.257 rmmod nvme_fabrics 00:27:52.257 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:52.518 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:27:52.518 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:27:52.518 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:27:52.518 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:52.518 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:52.518 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:52.518 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:27:52.518 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:27:52.518 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:52.518 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:27:52.518 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:52.518 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:52.518 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.518 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:52.518 11:51:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:54.432 11:51:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:54.432 11:51:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:54.432 11:51:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:54.432 11:51:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:27:54.432 11:51:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:54.432 11:51:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:54.432 11:51:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:54.432 11:51:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:54.432 11:51:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:54.432 11:51:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:54.432 11:51:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:58.638 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:58.638 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:58.638 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci 00:27:58.638 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:58.638 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:58.638 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:58.638 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:58.638 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:58.638 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:58.638 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:58.638 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:27:58.638 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:27:58.638 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:27:58.638 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:27:58.639 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:27:58.639 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:27:58.639 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:27:58.639 00:27:58.639 real 0m19.772s 00:27:58.639 user 0m5.443s 00:27:58.639 sys 0m11.323s 00:27:58.639 11:51:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:58.639 11:51:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:58.639 ************************************ 00:27:58.639 END TEST nvmf_identify_kernel_target 00:27:58.639 ************************************ 00:27:58.639 11:51:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:58.639 11:51:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:58.639 11:51:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:58.639 11:51:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.639 ************************************ 00:27:58.639 START TEST nvmf_auth_host 00:27:58.639 ************************************ 00:27:58.639 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:58.901 * Looking for test storage... 
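The clean_kernel_target steps logged above tear down the kernel nvmet target through configfs, removing children before parents so each rmdir succeeds. A minimal standalone sketch of that order (paths and NQN taken from the log; the destination of the logged `echo 0` is elided by xtrace and is assumed here to be the namespace enable attribute):

  nqn=nqn.2016-06.io.spdk:testnqn
  cfg=/sys/kernel/config/nvmet
  echo 0 > "$cfg/subsystems/$nqn/namespaces/1/enable"  # assumed: take the namespace offline first
  rm -f "$cfg/ports/1/subsystems/$nqn"                 # unlink the subsystem from the TCP port
  rmdir "$cfg/subsystems/$nqn/namespaces/1"            # then remove namespace ...
  rmdir "$cfg/ports/1"                                 # ... port ...
  rmdir "$cfg/subsystems/$nqn"                         # ... and subsystem
  modprobe -r nvmet_tcp nvmet                          # finally drop the kernel target modules

With the kernel target gone, setup.sh rebinds the ioatdma channels and the local NVMe device back to vfio-pci for SPDK's use, as the 8086/144d rebind lines above show.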
00:27:58.901 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:58.901 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:58.901 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:27:58.901 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:58.901 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:58.901 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:58.901 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:58.901 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:58.901 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:58.901 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:58.901 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:58.901 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:58.901 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:58.901 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:58.901 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:58.901 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:58.901 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:27:58.901 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:27:58.901 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:58.901 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:58.901 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:27:58.901 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:27:58.901 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:58.901 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:27:58.901 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:58.901 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:27:58.901 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:27:58.901 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:58.901 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:27:58.901 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:58.901 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:58.901 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:58.901 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:27:58.901 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:58.901 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:58.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:58.901 --rc genhtml_branch_coverage=1 00:27:58.901 --rc genhtml_function_coverage=1 00:27:58.901 --rc genhtml_legend=1 00:27:58.901 --rc geninfo_all_blocks=1 00:27:58.901 --rc geninfo_unexecuted_blocks=1 00:27:58.901 00:27:58.901 ' 00:27:58.901 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:58.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:58.901 --rc genhtml_branch_coverage=1 00:27:58.901 --rc genhtml_function_coverage=1 00:27:58.901 --rc genhtml_legend=1 00:27:58.901 --rc geninfo_all_blocks=1 00:27:58.901 --rc geninfo_unexecuted_blocks=1 00:27:58.901 00:27:58.901 ' 00:27:58.901 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:58.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:58.901 --rc genhtml_branch_coverage=1 00:27:58.901 --rc genhtml_function_coverage=1 00:27:58.901 --rc genhtml_legend=1 00:27:58.901 --rc geninfo_all_blocks=1 00:27:58.901 --rc geninfo_unexecuted_blocks=1 00:27:58.901 00:27:58.901 ' 00:27:58.901 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:58.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:58.901 --rc genhtml_branch_coverage=1 00:27:58.901 --rc genhtml_function_coverage=1 00:27:58.901 --rc genhtml_legend=1 00:27:58.901 --rc geninfo_all_blocks=1 00:27:58.901 --rc geninfo_unexecuted_blocks=1 00:27:58.901 00:27:58.901 ' 00:27:58.901 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:58.901 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:58.901 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:58.901 11:51:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:58.901 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:58.902 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:58.902 11:51:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:28:07.052 11:51:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:07.052 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:07.052 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:07.052 
11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:07.052 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:07.052 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:07.053 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:07.053 11:51:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:07.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:07.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.495 ms 00:28:07.053 00:28:07.053 --- 10.0.0.2 ping statistics --- 00:28:07.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:07.053 rtt min/avg/max/mdev = 0.495/0.495/0.495/0.000 ms 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:07.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:07.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:28:07.053 00:28:07.053 --- 10.0.0.1 ping statistics --- 00:28:07.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:07.053 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1228626 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1228626 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 1228626 ']' 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
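nvmftestinit has now built the two-port loopback topology this suite runs on: the first e810 port (cvl_0_0) is moved into a fresh network namespace and becomes the target side at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and both directions are verified with a single ping. A condensed sketch of the same commands (interface and namespace names taken from the log above):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP
  ping -c 1 10.0.0.2                                   # root namespace -> namespaced target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and back

The nvmf_tgt just started above runs inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth), so the discovery and auth traffic that follows crosses the real NIC ports rather than loopback.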
00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:07.053 11:51:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.315 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:07.315 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:28:07.315 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:07.315 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:07.315 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.315 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:07.315 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:28:07.315 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:28:07.315 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:07.315 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:07.315 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:07.315 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:07.315 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:07.315 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:07.315 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=74184cb7e45fd004490dba242d032a0c 00:28:07.315 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:07.315 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.17H 00:28:07.315 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 74184cb7e45fd004490dba242d032a0c 0 00:28:07.315 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 74184cb7e45fd004490dba242d032a0c 0 00:28:07.315 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:07.315 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:07.315 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=74184cb7e45fd004490dba242d032a0c 00:28:07.315 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:07.315 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.17H 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.17H 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.17H 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:07.578 11:51:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=fa312d74a3529a16522a95e9ec8469019fe17543a62114426c619b49877b99c1 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.xF8 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key fa312d74a3529a16522a95e9ec8469019fe17543a62114426c619b49877b99c1 3 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 fa312d74a3529a16522a95e9ec8469019fe17543a62114426c619b49877b99c1 3 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=fa312d74a3529a16522a95e9ec8469019fe17543a62114426c619b49877b99c1 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.xF8 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.xF8 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.xF8 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f0c03dc25c9c110509d3d088c9a9852e866e053f1d1f3811 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.HcU 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f0c03dc25c9c110509d3d088c9a9852e866e053f1d1f3811 0 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f0c03dc25c9c110509d3d088c9a9852e866e053f1d1f3811 0 
00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f0c03dc25c9c110509d3d088c9a9852e866e053f1d1f3811 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.HcU 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.HcU 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.HcU 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=9acf504bf78e61fa3913e505b884afe4d7994afc6298dca9 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.cC7 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 9acf504bf78e61fa3913e505b884afe4d7994afc6298dca9 2 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 9acf504bf78e61fa3913e505b884afe4d7994afc6298dca9 2 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=9acf504bf78e61fa3913e505b884afe4d7994afc6298dca9 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:28:07.578 11:51:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:07.578 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.cC7 00:28:07.578 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.cC7 00:28:07.578 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.cC7 00:28:07.578 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:07.578 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:07.578 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:07.578 11:51:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:07.578 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:28:07.578 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:07.578 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:07.578 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4ddeb8da24169604a4bf3dbe1bdeb129 00:28:07.578 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:28:07.578 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.0gc 00:28:07.578 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4ddeb8da24169604a4bf3dbe1bdeb129 1 00:28:07.578 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4ddeb8da24169604a4bf3dbe1bdeb129 1 00:28:07.578 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:07.578 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:07.578 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4ddeb8da24169604a4bf3dbe1bdeb129 00:28:07.578 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:28:07.578 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.0gc 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.0gc 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.0gc 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8512458544f74827b405475e75ef2b1d 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.1Gq 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8512458544f74827b405475e75ef2b1d 1 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8512458544f74827b405475e75ef2b1d 1 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=8512458544f74827b405475e75ef2b1d 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.1Gq 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.1Gq 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.1Gq 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=90ef1a18cb4dbd118d7137782bf67f9dd87b7f3786947a56 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.1IR 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 90ef1a18cb4dbd118d7137782bf67f9dd87b7f3786947a56 2 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 90ef1a18cb4dbd118d7137782bf67f9dd87b7f3786947a56 2 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=90ef1a18cb4dbd118d7137782bf67f9dd87b7f3786947a56 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.1IR 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.1IR 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.1IR 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:28:07.841 11:51:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=38dd1839f1c8879a3fa4efb48cef7cf0 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.IaY 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 38dd1839f1c8879a3fa4efb48cef7cf0 0 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 38dd1839f1c8879a3fa4efb48cef7cf0 0 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=38dd1839f1c8879a3fa4efb48cef7cf0 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.IaY 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.IaY 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.IaY 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a937cc0dd0664f19762249efd53b3922fef61c85e95c58c79d10c7e6615e4b25 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.bey 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a937cc0dd0664f19762249efd53b3922fef61c85e95c58c79d10c7e6615e4b25 3 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a937cc0dd0664f19762249efd53b3922fef61c85e95c58c79d10c7e6615e4b25 3 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a937cc0dd0664f19762249efd53b3922fef61c85e95c58c79d10c7e6615e4b25 00:28:07.841 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:28:08.103 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:28:08.103 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.bey 00:28:08.103 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.bey 00:28:08.103 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.bey 00:28:08.103 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:28:08.103 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1228626 00:28:08.103 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 1228626 ']' 00:28:08.103 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:08.103 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:08.103 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:08.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:08.103 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:08.103 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.103 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:08.103 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:28:08.103 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:08.103 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.17H 00:28:08.103 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.103 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.103 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.103 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.xF8 ]] 00:28:08.103 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xF8 00:28:08.104 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.104 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.104 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.104 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:08.104 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.HcU 00:28:08.104 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.104 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.104 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.104 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.cC7 ]] 00:28:08.104 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.cC7 00:28:08.104 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.104 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.366 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.366 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:08.366 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.0gc 00:28:08.366 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.366 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.366 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.366 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.1Gq ]] 00:28:08.366 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1Gq 00:28:08.366 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.366 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.366 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.366 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:08.366 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.1IR 00:28:08.366 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.366 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.366 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.366 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.IaY ]] 00:28:08.366 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.IaY 00:28:08.366 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.366 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.366 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.366 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:08.366 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.bey 00:28:08.366 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.366 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.366 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.366 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:28:08.366 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:28:08.366 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:28:08.366 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:08.366 11:51:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:08.366 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:08.366 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.366 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.366 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:08.366 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.366 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:08.367 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:08.367 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:08.367 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:28:08.367 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:28:08.367 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:08.367 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:08.367 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:08.367 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:08.367 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:28:08.367 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:08.367 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:08.367 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:08.367 11:51:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:11.673 Waiting for block devices as requested 00:28:11.673 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:11.673 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:11.934 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:11.934 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:11.934 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:12.193 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:12.193 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:12.193 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:12.454 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:12.454 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:12.715 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:12.715 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:12.715 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:12.715 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:12.976 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:12.976 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:12.976 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:13.920 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:13.920 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:13.920 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:13.920 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:28:13.920 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:13.920 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:28:13.920 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:13.920 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:13.920 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:13.920 No valid GPT data, bailing 00:28:13.920 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:13.920 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:28:13.920 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:28:13.920 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:13.920 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:28:13.920 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:13.920 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:13.920 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:13.920 11:51:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:28:13.920 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:28:13.920 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:28:13.920 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:28:13.920 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:28:13.920 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:28:13.920 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:28:13.920 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:28:13.920 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:13.920 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:28:14.182 00:28:14.182 Discovery Log Number of Records 2, Generation counter 2 00:28:14.182 =====Discovery Log Entry 0====== 00:28:14.182 trtype: tcp 00:28:14.182 adrfam: ipv4 00:28:14.182 subtype: current discovery subsystem 00:28:14.182 treq: not specified, sq flow control disable supported 00:28:14.182 portid: 1 00:28:14.182 trsvcid: 4420 00:28:14.182 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:14.182 traddr: 10.0.0.1 00:28:14.182 eflags: none 00:28:14.182 sectype: none 00:28:14.182 =====Discovery Log Entry 1====== 00:28:14.182 trtype: tcp 00:28:14.182 adrfam: ipv4 00:28:14.182 subtype: nvme subsystem 00:28:14.182 treq: not specified, sq flow control disable supported 00:28:14.182 portid: 1 00:28:14.182 trsvcid: 4420 00:28:14.182 subnqn: nqn.2024-02.io.spdk:cnode0 00:28:14.182 traddr: 10.0.0.1 00:28:14.182 eflags: none 00:28:14.182 sectype: none 00:28:14.182 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:14.182 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:28:14.182 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:14.182 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:14.182 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.182 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:14.182 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:14.182 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:14.182 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBjMDNkYzI1YzljMTEwNTA5ZDNkMDg4YzlhOTg1MmU4NjZlMDUzZjFkMWYzODExmfYEhA==: 00:28:14.182 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: 00:28:14.182 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:14.182 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:28:14.182 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBjMDNkYzI1YzljMTEwNTA5ZDNkMDg4YzlhOTg1MmU4NjZlMDUzZjFkMWYzODExmfYEhA==: 00:28:14.182 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: ]] 00:28:14.182 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: 00:28:14.182 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:14.182 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:28:14.182 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:14.182 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:14.182 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:28:14.182 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.182 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:28:14.182 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:14.182 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:14.182 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.182 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:14.182 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.182 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.182 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.182 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.183 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:14.183 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:14.183 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:14.183 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.183 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.183 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:14.183 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.183 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:14.183 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:14.183 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:14.183 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:14.183 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.183 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.183 nvme0n1 00:28:14.183 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.183 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.183 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.183 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.183 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.183 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzQxODRjYjdlNDVmZDAwNDQ5MGRiYTI0MmQwMzJhMGMfn/Oq: 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzQxODRjYjdlNDVmZDAwNDQ5MGRiYTI0MmQwMzJhMGMfn/Oq: 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: ]] 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
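
Everything from configure_kernel_target up to the bdev_nvme_attach_controller call above reduces to a short configfs-plus-RPC sequence: build a kernel nvmet subsystem backed by /dev/nvme0n1, expose it on a TCP port, pin the DH-HMAC-CHAP material for one allowed host, then have the SPDK initiator dial in with the matching keyring entries. The condensed replay below copies the echoed values and the rpc arguments from the trace; the configfs file names on the right of each redirect are inferred from the stock Linux nvmet layout (xtrace only shows the values being written), so treat those destinations as assumptions. rpc_cmd in the trace is a thin wrapper around scripts/rpc.py.

# --- target side (kernel nvmet) ---
modprobe nvmet
nvmet=/sys/kernel/config/nvmet
subnqn=nqn.2024-02.io.spdk:cnode0
hostnqn=nqn.2024-02.io.spdk:host0
mkdir -p "$nvmet/subsystems/$subnqn/namespaces/1" "$nvmet/ports/1" "$nvmet/hosts/$hostnqn"
echo /dev/nvme0n1 > "$nvmet/subsystems/$subnqn/namespaces/1/device_path"
echo 1            > "$nvmet/subsystems/$subnqn/namespaces/1/enable"
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
echo tcp          > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$nvmet/subsystems/$subnqn" "$nvmet/ports/1/subsystems/"
# restrict to the one authenticated host (the "echo 0" in nvmet_auth_init)
echo 0 > "$nvmet/subsystems/$subnqn/attr_allow_any_host"
ln -s "$nvmet/hosts/$hostnqn" "$nvmet/subsystems/$subnqn/allowed_hosts/"
# per-host DH-HMAC-CHAP material (values taken verbatim from the trace)
echo 'hmac(sha256)' > "$nvmet/hosts/$hostnqn/dhchap_hash"
echo ffdhe2048      > "$nvmet/hosts/$hostnqn/dhchap_dhgroup"
echo 'DHHC-1:00:ZjBjMDNkYzI1YzljMTEwNTA5ZDNkMDg4YzlhOTg1MmU4NjZlMDUzZjFkMWYzODExmfYEhA==:' > "$nvmet/hosts/$hostnqn/dhchap_key"
echo 'DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==:' > "$nvmet/hosts/$hostnqn/dhchap_ctrl_key"

# --- initiator side (SPDK), mirroring the rpc_cmd calls in the trace ---
scripts/rpc.py keyring_file_add_key key1 /tmp/spdk.key-null.HcU
scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cC7
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 \
    --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

If the handshake succeeds, bdev_nvme_get_controllers reports nvme0, which is exactly the [[ nvme0 == \n\v\m\e\0 ]] check the test repeats after every attach/detach cycle below as it walks the digest, dhgroup, and keyid combinations.
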
00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.444 nvme0n1 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.444 11:51:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.444 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.706 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.706 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.706 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:14.706 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.706 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:14.706 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:14.706 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:14.706 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBjMDNkYzI1YzljMTEwNTA5ZDNkMDg4YzlhOTg1MmU4NjZlMDUzZjFkMWYzODExmfYEhA==: 00:28:14.706 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: 00:28:14.706 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:14.706 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:14.706 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBjMDNkYzI1YzljMTEwNTA5ZDNkMDg4YzlhOTg1MmU4NjZlMDUzZjFkMWYzODExmfYEhA==: 00:28:14.706 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: ]] 00:28:14.706 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: 00:28:14.706 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:28:14.706 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.706 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:14.706 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:14.706 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:14.706 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.706 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:14.706 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.706 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.706 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.706 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.706 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:14.706 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:14.706 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:14.706 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.706 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.706 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:14.706 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.706 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:14.706 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:14.706 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:14.706 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:14.706 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.706 11:51:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.706 nvme0n1 00:28:14.706 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.706 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.706 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.706 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.706 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.706 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.706 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.706 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.706 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.706 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.706 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.706 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.706 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:14.706 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.706 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:14.706 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:14.706 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:14.706 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRkZWI4ZGEyNDE2OTYwNGE0YmYzZGJlMWJkZWIxMjncUftA: 00:28:14.706 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: 00:28:14.706 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:14.706 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:14.706 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NGRkZWI4ZGEyNDE2OTYwNGE0YmYzZGJlMWJkZWIxMjncUftA: 00:28:14.706 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: ]] 00:28:14.706 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: 00:28:14.706 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:28:14.706 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.706 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:14.706 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:14.706 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:14.706 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.706 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:14.706 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.706 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.968 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.968 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.968 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:14.968 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:14.968 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:14.968 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.968 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.968 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:14.968 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.968 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:14.968 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:14.968 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:14.968 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:14.968 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.968 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.968 nvme0n1 00:28:14.968 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.968 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.968 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.968 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:28:14.968 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.968 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.968 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.968 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.968 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.968 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.968 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.968 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.968 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:28:14.968 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.968 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:14.968 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:14.968 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:14.969 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTBlZjFhMThjYjRkYmQxMThkNzEzNzc4MmJmNjdmOWRkODdiN2YzNzg2OTQ3YTU2F1gSSQ==: 00:28:14.969 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: 00:28:14.969 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:14.969 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:14.969 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBlZjFhMThjYjRkYmQxMThkNzEzNzc4MmJmNjdmOWRkODdiN2YzNzg2OTQ3YTU2F1gSSQ==: 00:28:14.969 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: ]] 00:28:14.969 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: 00:28:14.969 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:28:14.969 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.969 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:14.969 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:14.969 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:14.969 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.969 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:14.969 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.969 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.969 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.969 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:28:14.969 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:14.969 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:14.969 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:14.969 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.969 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.969 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:14.969 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.969 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:14.969 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:14.969 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:14.969 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:14.969 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.969 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.229 nvme0n1 00:28:15.229 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.229 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.229 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.229 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.229 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.229 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.230 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.230 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.230 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.230 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.230 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.230 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.230 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:28:15.230 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.230 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:15.230 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:15.230 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:15.230 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YTkzN2NjMGRkMDY2NGYxOTc2MjI0OWVmZDUzYjM5MjJmZWY2MWM4NWU5NWM1OGM3OWQxMGM3ZTY2MTVlNGIyNR9ULdY=: 00:28:15.230 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:15.230 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:15.230 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:15.230 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTkzN2NjMGRkMDY2NGYxOTc2MjI0OWVmZDUzYjM5MjJmZWY2MWM4NWU5NWM1OGM3OWQxMGM3ZTY2MTVlNGIyNR9ULdY=: 00:28:15.230 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:15.230 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:28:15.230 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.230 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:15.230 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:15.230 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:15.230 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.230 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:15.230 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.230 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.230 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.230 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.230 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:15.230 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:15.230 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:15.230 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.230 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.230 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:15.230 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.230 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:15.230 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:15.230 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:15.230 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:15.230 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.230 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.490 nvme0n1 00:28:15.490 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.490 11:51:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.490 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.490 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.490 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.490 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.490 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.490 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.491 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.491 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.491 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.491 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:15.491 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.491 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:28:15.491 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.491 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:15.491 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:15.491 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:15.491 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzQxODRjYjdlNDVmZDAwNDQ5MGRiYTI0MmQwMzJhMGMfn/Oq: 00:28:15.491 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: 00:28:15.491 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:15.491 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:15.491 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzQxODRjYjdlNDVmZDAwNDQ5MGRiYTI0MmQwMzJhMGMfn/Oq: 00:28:15.491 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: ]] 00:28:15.491 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: 00:28:15.491 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:28:15.491 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.491 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:15.491 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:15.491 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:15.491 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.491 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:15.491 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.491 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.491 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.491 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.491 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:15.491 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:15.491 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:15.491 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.491 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.491 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:15.491 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.491 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:15.491 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:15.491 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:15.491 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:15.491 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.491 11:51:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.752 nvme0n1 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBjMDNkYzI1YzljMTEwNTA5ZDNkMDg4YzlhOTg1MmU4NjZlMDUzZjFkMWYzODExmfYEhA==: 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBjMDNkYzI1YzljMTEwNTA5ZDNkMDg4YzlhOTg1MmU4NjZlMDUzZjFkMWYzODExmfYEhA==: 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: ]] 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:15.752 
11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.752 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.012 nvme0n1 00:28:16.012 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.012 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.012 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.012 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.012 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.012 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.012 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.012 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.012 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.012 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.012 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.012 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.012 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:28:16.012 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.012 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:16.012 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:16.013 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:16.013 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRkZWI4ZGEyNDE2OTYwNGE0YmYzZGJlMWJkZWIxMjncUftA: 00:28:16.013 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: 00:28:16.013 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:16.013 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:16.013 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRkZWI4ZGEyNDE2OTYwNGE0YmYzZGJlMWJkZWIxMjncUftA: 00:28:16.013 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: ]] 00:28:16.013 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: 00:28:16.013 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:28:16.013 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.013 11:51:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:16.013 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:16.013 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:16.013 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.013 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:16.013 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.013 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.013 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.013 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.013 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:16.013 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:16.013 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:16.013 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.013 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.013 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:16.013 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.013 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:16.013 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:16.013 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:16.013 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:16.013 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.013 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.273 nvme0n1 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTBlZjFhMThjYjRkYmQxMThkNzEzNzc4MmJmNjdmOWRkODdiN2YzNzg2OTQ3YTU2F1gSSQ==: 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBlZjFhMThjYjRkYmQxMThkNzEzNzc4MmJmNjdmOWRkODdiN2YzNzg2OTQ3YTU2F1gSSQ==: 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: ]] 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.273 11:51:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:16.273 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:16.274 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:16.274 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.274 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.543 nvme0n1 00:28:16.543 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.543 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.543 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.543 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.544 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.544 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.544 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.544 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.544 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.544 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.544 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.544 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.544 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:28:16.544 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.544 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:16.544 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:16.544 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:16.544 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTkzN2NjMGRkMDY2NGYxOTc2MjI0OWVmZDUzYjM5MjJmZWY2MWM4NWU5NWM1OGM3OWQxMGM3ZTY2MTVlNGIyNR9ULdY=: 00:28:16.544 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:16.544 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:16.544 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:16.544 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTkzN2NjMGRkMDY2NGYxOTc2MjI0OWVmZDUzYjM5MjJmZWY2MWM4NWU5NWM1OGM3OWQxMGM3ZTY2MTVlNGIyNR9ULdY=: 00:28:16.544 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:16.544 11:51:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:28:16.544 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.544 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:16.544 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:16.544 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:16.544 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.544 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:16.544 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.544 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.544 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.544 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.544 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:16.544 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:16.544 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:16.544 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.545 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.545 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:16.545 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.545 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:16.545 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:16.545 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:16.545 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:16.545 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.545 11:51:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.808 nvme0n1 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzQxODRjYjdlNDVmZDAwNDQ5MGRiYTI0MmQwMzJhMGMfn/Oq: 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzQxODRjYjdlNDVmZDAwNDQ5MGRiYTI0MmQwMzJhMGMfn/Oq: 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: ]] 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.808 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.070 nvme0n1 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBjMDNkYzI1YzljMTEwNTA5ZDNkMDg4YzlhOTg1MmU4NjZlMDUzZjFkMWYzODExmfYEhA==: 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: 00:28:17.070 11:51:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBjMDNkYzI1YzljMTEwNTA5ZDNkMDg4YzlhOTg1MmU4NjZlMDUzZjFkMWYzODExmfYEhA==: 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: ]] 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.070 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.331 nvme0n1 00:28:17.331 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:28:17.331 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.331 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.331 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.331 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.331 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRkZWI4ZGEyNDE2OTYwNGE0YmYzZGJlMWJkZWIxMjncUftA: 00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: 00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRkZWI4ZGEyNDE2OTYwNGE0YmYzZGJlMWJkZWIxMjncUftA: 00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: ]] 00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: 00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
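
For readers following the trace: each iteration above exercises one (digest, dhgroup, keyid) combination end to end. Stripped of the xtrace noise, the initiator-side flow reduces to roughly the four RPCs below (a minimal sketch; rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py, key2/ckey2 are the keyring names the test registered earlier for the DHHC-1 secrets, and the NQNs and 10.0.0.1:4420 are the fixed test values seen in the trace):

  # Pin the initiator to a single digest/DH-group pair (here sha256 + ffdhe4096).
  rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

  # Connect to the kernel nvmet target, authenticating with key2; passing
  # --dhchap-ctrlr-key as well makes the authentication bidirectional.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Confirm the controller authenticated and came up, then detach for the next pass.
  rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
  rpc_cmd bdev_nvme_detach_controller nvme0

The bare nvme0n1 lines interleaved in the trace are the attach RPC printing the bdev it created for the controller's namespace.
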
00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.592 11:51:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.853 nvme0n1 00:28:17.853 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.853 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.853 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.853 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.853 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.853 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTBlZjFhMThjYjRkYmQxMThkNzEzNzc4MmJmNjdmOWRkODdiN2YzNzg2OTQ3YTU2F1gSSQ==: 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBlZjFhMThjYjRkYmQxMThkNzEzNzc4MmJmNjdmOWRkODdiN2YzNzg2OTQ3YTU2F1gSSQ==: 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: ]] 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.854 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.114 nvme0n1 00:28:18.114 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.114 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.114 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.114 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.114 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.114 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.114 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.114 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.114 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.114 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.114 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.114 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.114 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:28:18.114 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.114 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:18.114 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:18.114 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:18.114 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTkzN2NjMGRkMDY2NGYxOTc2MjI0OWVmZDUzYjM5MjJmZWY2MWM4NWU5NWM1OGM3OWQxMGM3ZTY2MTVlNGIyNR9ULdY=: 00:28:18.114 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:18.114 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:18.114 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:18.114 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTkzN2NjMGRkMDY2NGYxOTc2MjI0OWVmZDUzYjM5MjJmZWY2MWM4NWU5NWM1OGM3OWQxMGM3ZTY2MTVlNGIyNR9ULdY=: 00:28:18.114 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:18.114 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:28:18.114 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.114 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:18.114 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:18.114 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:18.114 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.114 11:51:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:18.114 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.114 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.114 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.114 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.114 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:18.114 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:18.114 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:18.114 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.114 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.115 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:18.115 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.115 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:18.115 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:18.115 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:18.115 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:18.115 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.115 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.374 nvme0n1 00:28:18.374 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.374 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.374 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.374 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.374 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.374 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.634 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.634 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.634 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.634 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.634 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.634 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:18.634 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.634 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:28:18.634 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.634 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:18.634 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:18.634 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:18.634 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzQxODRjYjdlNDVmZDAwNDQ5MGRiYTI0MmQwMzJhMGMfn/Oq: 00:28:18.634 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: 00:28:18.634 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:18.635 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:18.635 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzQxODRjYjdlNDVmZDAwNDQ5MGRiYTI0MmQwMzJhMGMfn/Oq: 00:28:18.635 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: ]] 00:28:18.635 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: 00:28:18.635 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:28:18.635 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.635 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:18.635 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:18.635 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:18.635 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.635 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:18.635 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.635 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.635 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.635 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.635 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:18.635 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:18.635 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:18.635 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.635 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.635 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:18.635 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.635 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:28:18.635 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:18.635 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:18.635 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:18.635 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.635 11:51:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.895 nvme0n1 00:28:18.895 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.895 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.895 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.895 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.895 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.895 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.895 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.895 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.895 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.895 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.895 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.896 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.896 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:28:18.896 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.896 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:18.896 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:18.896 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:18.896 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBjMDNkYzI1YzljMTEwNTA5ZDNkMDg4YzlhOTg1MmU4NjZlMDUzZjFkMWYzODExmfYEhA==: 00:28:18.896 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: 00:28:18.896 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:18.896 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:18.896 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBjMDNkYzI1YzljMTEwNTA5ZDNkMDg4YzlhOTg1MmU4NjZlMDUzZjFkMWYzODExmfYEhA==: 00:28:18.896 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: ]] 00:28:18.896 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: 
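
connect_authenticate (host/auth.sh@55-65) is the initiator half of each iteration, reconstructed here from the xtrace. Note that --dhchap-key and --dhchap-ctrlr-key take the names key0..key4 / ckey0..ckey3 of secrets registered earlier in the test, outside this excerpt; treat this as a sketch, not the verbatim function:

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3                    # auth.sh@55-57
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})  # @58

    # Permit exactly one digest and one DH group, so a successful attach
    # proves that this specific combination negotiated (auth.sh@60).
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
        --dhchap-dhgroups "$dhgroup"

    # Attach over TCP with the host key for this keyid (auth.sh@61).
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"

    # The controller must show up before it is torn down (auth.sh@64-65).
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}
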
00:28:18.896 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:28:18.896 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.896 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:18.896 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:18.896 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:18.896 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.896 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:18.896 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.896 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.156 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.156 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.156 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:19.156 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:19.156 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:19.156 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.156 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.156 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:19.156 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.156 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:19.156 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:19.156 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:19.156 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:19.156 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.156 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.428 nvme0n1 00:28:19.428 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.428 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.428 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.428 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.428 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.429 11:51:44 
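
The get_main_ns_ip helper (nvmf/common.sh@769-783) that keeps reappearing resolves which address the initiator should dial for the active transport. Reconstructed shape; the TEST_TRANSPORT variable name and NVMF_INITIATOR_IP=10.0.0.1 are inferred from the expanded values (tcp, 10.0.0.1) in the trace:

get_main_ns_ip() {
    local ip                                               # common.sh@769
    local -A ip_candidates=(                               # common.sh@770-773
        [rdma]=NVMF_FIRST_TARGET_IP
        [tcp]=NVMF_INITIATOR_IP
    )
    [[ -z $TEST_TRANSPORT ]] && return 1                   # @775: [[ -z tcp ]]
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}                   # @776
    [[ -z ${!ip} ]] && return 1                            # @778, indirection
    echo "${!ip}"                                          # @783 -> 10.0.0.1
}
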
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRkZWI4ZGEyNDE2OTYwNGE0YmYzZGJlMWJkZWIxMjncUftA: 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRkZWI4ZGEyNDE2OTYwNGE0YmYzZGJlMWJkZWIxMjncUftA: 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: ]] 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
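
The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line at auth.sh@58 relies on bash's ${var:+word} alternate expansion: the array receives the option pair only when a controller key exists for that keyid. Standalone demonstration (values illustrative):

ckeys=([1]="DHHC-1:02:EXAMPLEONLY==:" [4]="")
for keyid in 1 4; do
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${#ckey[@]} extra args: ${ckey[*]}"
done
# keyid=1 -> 2 extra args: --dhchap-ctrlr-key ckey1
# keyid=4 -> 0 extra args:

This is why keyid 4 later attaches with --dhchap-key alone.
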
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.429 11:51:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.002 nvme0n1 00:28:20.002 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.002 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.002 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.002 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.002 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.002 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.002 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.002 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.002 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.002 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.002 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.002 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.002 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:28:20.002 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.002 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:20.002 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:20.002 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:20.002 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTBlZjFhMThjYjRkYmQxMThkNzEzNzc4MmJmNjdmOWRkODdiN2YzNzg2OTQ3YTU2F1gSSQ==: 00:28:20.002 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: 00:28:20.002 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:20.002 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:20.002 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:OTBlZjFhMThjYjRkYmQxMThkNzEzNzc4MmJmNjdmOWRkODdiN2YzNzg2OTQ3YTU2F1gSSQ==: 00:28:20.002 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: ]] 00:28:20.003 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: 00:28:20.003 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:28:20.003 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.003 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:20.003 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:20.003 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:20.003 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.003 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:20.003 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.003 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.003 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.003 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.003 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:20.003 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:20.003 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:20.003 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.003 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.003 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:20.003 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.003 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:20.003 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:20.003 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:20.003 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:20.003 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.003 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.573 nvme0n1 00:28:20.573 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.573 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.573 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
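
The DHHC-1:<nn>:<base64>: strings are DH-HMAC-CHAP secrets in the NVMe TP 8006 text representation: the middle field records how the secret was transformed (00 = stored raw; 01/02/03 = SHA-256/384/512) and the base64 payload carries the secret plus a CRC-32; the 48 base64 characters here decode to 36 bytes, i.e. a 32-byte secret plus the 4-byte CRC. That mapping comes from the spec and nvme-cli conventions, not from this log. Quick field split of a key taken from the trace:

key="DHHC-1:01:NGRkZWI4ZGEyNDE2OTYwNGE0YmYzZGJlMWJkZWIxMjncUftA:"
IFS=: read -r tag xform b64 _ <<< "$key"
echo "format=$tag transform=$xform payload=${#b64} base64 chars"
# format=DHHC-1 transform=01 payload=48 base64 chars
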
common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTkzN2NjMGRkMDY2NGYxOTc2MjI0OWVmZDUzYjM5MjJmZWY2MWM4NWU5NWM1OGM3OWQxMGM3ZTY2MTVlNGIyNR9ULdY=: 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTkzN2NjMGRkMDY2NGYxOTc2MjI0OWVmZDUzYjM5MjJmZWY2MWM4NWU5NWM1OGM3OWQxMGM3ZTY2MTVlNGIyNR9ULdY=: 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.574 11:51:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.834 nvme0n1 00:28:20.834 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.834 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.834 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.834 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.834 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.834 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzQxODRjYjdlNDVmZDAwNDQ5MGRiYTI0MmQwMzJhMGMfn/Oq: 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
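
The xtrace_disable / set +x / [[ 0 == 0 ]] triplets wrapped around every RPC come from SPDK's rpc_cmd helper: tracing is muted while the JSON-RPC call runs, then the saved exit status is re-tested. The real helper keeps a persistent rpc.py session, so this sketch only mirrors the visible mute-and-check shape (the rpc.py path is an assumption):

xtrace_disable() { set +x; } 2>/dev/null   # autotest_common.sh@561 in the log
xtrace_restore() { set -x; }

rpc_cmd() {
    local status
    xtrace_disable
    scripts/rpc.py "$@"                     # SPDK's JSON-RPC client
    status=$?
    xtrace_restore
    [[ $status == 0 ]]                      # the [[ 0 == 0 ]] checks above
}
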
ckey=DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzQxODRjYjdlNDVmZDAwNDQ5MGRiYTI0MmQwMzJhMGMfn/Oq: 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: ]] 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.094 11:51:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:21.665 nvme0n1 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBjMDNkYzI1YzljMTEwNTA5ZDNkMDg4YzlhOTg1MmU4NjZlMDUzZjFkMWYzODExmfYEhA==: 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBjMDNkYzI1YzljMTEwNTA5ZDNkMDg4YzlhOTg1MmU4NjZlMDUzZjFkMWYzODExmfYEhA==: 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: ]] 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
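
Each iteration is verified by listing controllers and comparing against the literal name nvme0; the heavy escaping in [[ nvme0 == \n\v\m\e\0 ]] appears because [[ == ]] treats its right-hand side as a glob pattern, so the script backslash-escapes every character to force a literal match. The bare nvme0n1 tokens interleaved in the log are apparently the attached controller's namespace surfacing on stdout, not part of the xtrace. Standalone equivalent of the check, against canned RPC output:

controllers='[{"name": "nvme0"}]'          # shape of bdev_nvme_get_controllers
name=$(jq -r '.[].name' <<< "$controllers")
[[ $name == "nvme0" ]] && echo "attach verified: $name"
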
"ckey${keyid}"}) 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:21.665 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:21.666 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:21.666 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.666 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.605 nvme0n1 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:22.605 
11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRkZWI4ZGEyNDE2OTYwNGE0YmYzZGJlMWJkZWIxMjncUftA: 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRkZWI4ZGEyNDE2OTYwNGE0YmYzZGJlMWJkZWIxMjncUftA: 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: ]] 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.605 11:51:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.176 nvme0n1 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTBlZjFhMThjYjRkYmQxMThkNzEzNzc4MmJmNjdmOWRkODdiN2YzNzg2OTQ3YTU2F1gSSQ==: 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBlZjFhMThjYjRkYmQxMThkNzEzNzc4MmJmNjdmOWRkODdiN2YzNzg2OTQ3YTU2F1gSSQ==: 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: ]] 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.176 
11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.176 11:51:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.749 nvme0n1 00:28:23.749 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.749 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.749 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.749 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.749 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.749 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.749 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.749 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.009 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.009 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:24.009 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.009 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.009 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:24.009 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.009 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:24.009 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:24.009 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:24.009 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTkzN2NjMGRkMDY2NGYxOTc2MjI0OWVmZDUzYjM5MjJmZWY2MWM4NWU5NWM1OGM3OWQxMGM3ZTY2MTVlNGIyNR9ULdY=: 00:28:24.009 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:24.009 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:24.009 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:24.009 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTkzN2NjMGRkMDY2NGYxOTc2MjI0OWVmZDUzYjM5MjJmZWY2MWM4NWU5NWM1OGM3OWQxMGM3ZTY2MTVlNGIyNR9ULdY=: 00:28:24.009 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:24.009 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:28:24.009 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.009 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:24.009 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:24.009 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:24.009 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.009 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:24.009 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.009 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.009 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.009 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.009 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:24.009 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:24.009 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:24.009 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.009 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.009 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:24.009 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.009 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:24.009 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:24.009 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:24.010 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:24.010 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.010 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.581 nvme0n1 00:28:24.581 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.581 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.581 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.581 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.581 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.581 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.581 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.581 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.581 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.581 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.581 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.581 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:24.581 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:24.581 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.581 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:28:24.581 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.581 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:24.581 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:24.581 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:24.581 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzQxODRjYjdlNDVmZDAwNDQ5MGRiYTI0MmQwMzJhMGMfn/Oq: 00:28:24.581 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: 00:28:24.581 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:24.581 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:24.581 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzQxODRjYjdlNDVmZDAwNDQ5MGRiYTI0MmQwMzJhMGMfn/Oq: 00:28:24.581 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
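
Two attach shapes recur in this run: keyids 0-3 pass both --dhchap-key and --dhchap-ctrlr-key (bidirectional DH-HMAC-CHAP, where the host also verifies the controller), while keyid 4 passes only --dhchap-key (unidirectional, since ckeys[4] is empty). Issued by hand, the two variants look like this (rpc.py path assumed; the flags and values are verbatim from the trace):

# bidirectional: controller must prove knowledge of ckey3 as well
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
    -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3

# unidirectional: no controller key registered for keyid 4
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
    -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
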
DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: ]] 00:28:24.581 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: 00:28:24.581 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:28:24.581 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.581 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:24.581 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:24.581 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:24.581 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.581 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:24.581 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.581 11:51:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.581 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.581 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.581 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:24.582 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:24.582 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:24.582 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.582 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.582 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:24.582 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.582 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:24.582 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:24.582 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:24.582 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:24.582 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.582 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.843 nvme0n1 00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
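
With host/auth.sh@100 advancing the outer loop, the digest switches from sha256 to sha384 and the dhgroup list restarts at ffdhe2048, which pins down the nesting order: digest outermost, then dhgroup, then keyid. Sketch of the matrix; list members beyond those visible in this excerpt (sha512, ffdhe3072) are assumptions:

digests=(sha256 sha384 sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
for digest in "${digests[@]}"; do           # host/auth.sh@100
    for dhgroup in "${dhgroups[@]}"; do     # host/auth.sh@101
        for keyid in 0 1 2 3 4; do          # host/auth.sh@102: "${!keys[@]}"
            echo "$digest $dhgroup keyid=$keyid"
        done
    done
done
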
common/autotest_common.sh@10 -- # set +x 00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBjMDNkYzI1YzljMTEwNTA5ZDNkMDg4YzlhOTg1MmU4NjZlMDUzZjFkMWYzODExmfYEhA==: 00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: 00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBjMDNkYzI1YzljMTEwNTA5ZDNkMDg4YzlhOTg1MmU4NjZlMDUzZjFkMWYzODExmfYEhA==: 00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: ]] 00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: 00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:24.843 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:25.105 nvme0n1
00:28:25.105 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.105 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:25.105 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:25.105 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.105 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:25.105 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.105 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:25.105 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:25.105 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.105 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:25.105 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.105 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:25.105 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2
00:28:25.105 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:25.105 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:25.105 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:25.105 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:25.105 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRkZWI4ZGEyNDE2OTYwNGE0YmYzZGJlMWJkZWIxMjncUftA:
00:28:25.105 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS:
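A note on the DHHC-1:xx:...: strings in the key= and ckey= records just above: this is the standard NVMe-oF representation of a DH-HMAC-CHAP secret. The middle field names the transformation applied to the secret (00 = none; 01, 02, 03 = transformed with SHA-256, SHA-384, SHA-512 and hence a 32-, 48- or 64-byte secret), and the base64 field should be the secret itself followed by a 4-byte CRC-32. A quick way to peek inside one, assuming GNU coreutils plus xxd for display (illustrative only, and obviously never do this to a real production secret):

    key='DHHC-1:01:NGRkZWI4ZGEyNDE2OTYwNGE0YmYzZGJlMWJkZWIxMjncUftA:'
    cut -d: -f3 <<<"$key" | base64 -d | head -c -4 | xxd   # secret bytes, minus the trailing CRC-32

The secrets in this log appear to be throwaways generated for the test run, so printing them in console output is harmless.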
00:28:25.105 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:25.105 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:25.105 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRkZWI4ZGEyNDE2OTYwNGE0YmYzZGJlMWJkZWIxMjncUftA:
00:28:25.105 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: ]]
00:28:25.105 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS:
00:28:25.105 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2
00:28:25.105 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:25.105 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:25.105 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:25.105 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:25.105 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:25.105 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:28:25.105 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.105 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:25.105 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.105 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:25.105 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:28:25.105 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:28:25.105 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:28:25.106 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:28:25.106 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:28:25.106 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:28:25.106 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:28:25.106 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:28:25.106 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:28:25.106 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:28:25.106 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:25.106 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.106 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:25.367 nvme0n1
00:28:25.367 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
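Stepping back from the individual records: the host/auth.sh@101 and @102 markers in this trace are the heads of two nested loops, so the whole section is one test matrix being ground through mechanically. A paraphrase of its shape (the full contents of the dhgroups and keys arrays are not visible in this excerpt):

    for dhgroup in "${dhgroups[@]}"; do        # ffdhe2048, ffdhe3072, ffdhe4096 appear in this excerpt
        for keyid in "${!keys[@]}"; do         # key indices 0 1 2 3 4
            nvmet_auth_set_key sha384 "$dhgroup" "$keyid"    # program the target's expected secrets
            connect_authenticate sha384 "$dhgroup" "$keyid"  # prove the host can authenticate with them
        done
    done

Everything from here to the end of the excerpt is that same cycle repeating: ffdhe2048 finishes with keyid 4, then the dhgroup loop advances to ffdhe3072 (timestamps 11:51:51) and ffdhe4096 (11:51:52), still under hmac(sha384).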
00:28:25.367 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:25.367 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:25.367 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.367 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:25.367 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.367 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:25.367 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:25.367 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:25.367 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:25.367 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:25.367 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:25.367 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3
00:28:25.367 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:25.367 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:28:25.367 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:25.367 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:25.367 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTBlZjFhMThjYjRkYmQxMThkNzEzNzc4MmJmNjdmOWRkODdiN2YzNzg2OTQ3YTU2F1gSSQ==:
00:28:25.367 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp:
00:28:25.367 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:28:25.367 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:25.367 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBlZjFhMThjYjRkYmQxMThkNzEzNzc4MmJmNjdmOWRkODdiN2YzNzg2OTQ3YTU2F1gSSQ==:
00:28:25.367 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: ]]
00:28:25.367 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp:
00:28:25.367 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3
00:28:25.367 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:25.367 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:28:25.367 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:25.367 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:25.367 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:25.367 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups
ffdhe2048 00:28:25.367 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.367 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.367 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.367 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.367 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:25.367 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:25.367 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:25.368 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.368 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.368 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:25.368 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.368 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:25.368 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:25.368 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:25.368 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:25.368 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.368 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.368 nvme0n1 00:28:25.368 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.368 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.368 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.368 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.368 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTkzN2NjMGRkMDY2NGYxOTc2MjI0OWVmZDUzYjM5MjJmZWY2MWM4NWU5NWM1OGM3OWQxMGM3ZTY2MTVlNGIyNR9ULdY=: 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTkzN2NjMGRkMDY2NGYxOTc2MjI0OWVmZDUzYjM5MjJmZWY2MWM4NWU5NWM1OGM3OWQxMGM3ZTY2MTVlNGIyNR9ULdY=: 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.630 11:51:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.630 nvme0n1 00:28:25.630 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.630 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.630 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.630 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.630 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.630 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.891 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.891 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.891 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.891 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.891 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.891 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:25.891 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.891 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:28:25.891 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.891 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:25.891 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:25.891 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:25.891 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzQxODRjYjdlNDVmZDAwNDQ5MGRiYTI0MmQwMzJhMGMfn/Oq: 00:28:25.891 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: 00:28:25.892 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:25.892 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:25.892 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzQxODRjYjdlNDVmZDAwNDQ5MGRiYTI0MmQwMzJhMGMfn/Oq: 00:28:25.892 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: ]] 00:28:25.892 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: 00:28:25.892 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:28:25.892 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.892 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:25.892 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:28:25.892 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:25.892 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.892 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:25.892 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.892 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.892 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.892 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.892 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:25.892 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:25.892 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:25.892 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.892 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.892 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:25.892 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.892 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:25.892 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:25.892 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:25.892 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:25.892 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.892 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.892 nvme0n1 00:28:25.892 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.892 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.892 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.892 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.892 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.892 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.153 
11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBjMDNkYzI1YzljMTEwNTA5ZDNkMDg4YzlhOTg1MmU4NjZlMDUzZjFkMWYzODExmfYEhA==: 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBjMDNkYzI1YzljMTEwNTA5ZDNkMDg4YzlhOTg1MmU4NjZlMDUzZjFkMWYzODExmfYEhA==: 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: ]] 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:26.153 11:51:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.153 nvme0n1 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.153 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.414 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.414 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.414 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.414 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.414 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.414 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.414 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:26.414 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.414 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:26.414 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:26.414 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:26.414 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRkZWI4ZGEyNDE2OTYwNGE0YmYzZGJlMWJkZWIxMjncUftA: 00:28:26.414 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: 00:28:26.414 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:26.414 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:26.414 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRkZWI4ZGEyNDE2OTYwNGE0YmYzZGJlMWJkZWIxMjncUftA: 00:28:26.414 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: ]] 00:28:26.414 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: 00:28:26.414 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:28:26.415 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.415 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:26.415 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:26.415 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:26.415 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.415 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:26.415 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.415 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.415 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.415 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.415 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:26.415 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:26.415 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:26.415 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.415 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.415 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:26.415 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.415 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:26.415 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:26.415 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:26.415 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:26.415 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.415 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.415 nvme0n1 00:28:26.415 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.415 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.415 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.415 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.415 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.415 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.677 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:28:26.677 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.677 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.677 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.677 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.677 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.677 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:26.677 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.677 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:26.677 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:26.677 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:26.677 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTBlZjFhMThjYjRkYmQxMThkNzEzNzc4MmJmNjdmOWRkODdiN2YzNzg2OTQ3YTU2F1gSSQ==: 00:28:26.677 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: 00:28:26.677 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:26.677 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:26.677 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBlZjFhMThjYjRkYmQxMThkNzEzNzc4MmJmNjdmOWRkODdiN2YzNzg2OTQ3YTU2F1gSSQ==: 00:28:26.677 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: ]] 00:28:26.677 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: 00:28:26.677 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:28:26.677 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.677 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:26.677 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:26.677 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:26.677 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.677 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:26.677 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.677 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.677 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.677 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.677 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:26.677 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:26.677 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:28:26.677 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.677 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.677 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:26.677 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.677 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:26.677 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:26.677 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:26.677 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:26.678 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.678 11:51:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.678 nvme0n1 00:28:26.678 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.678 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.678 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.678 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.678 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.678 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.939 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.939 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.939 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.939 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.939 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.939 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.939 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:26.939 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.939 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:26.939 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:26.939 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:26.939 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTkzN2NjMGRkMDY2NGYxOTc2MjI0OWVmZDUzYjM5MjJmZWY2MWM4NWU5NWM1OGM3OWQxMGM3ZTY2MTVlNGIyNR9ULdY=: 00:28:26.939 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:26.939 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:26.939 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:26.939 
11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTkzN2NjMGRkMDY2NGYxOTc2MjI0OWVmZDUzYjM5MjJmZWY2MWM4NWU5NWM1OGM3OWQxMGM3ZTY2MTVlNGIyNR9ULdY=: 00:28:26.939 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:26.939 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:28:26.939 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.940 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:26.940 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:26.940 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:26.940 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.940 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:26.940 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.940 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.940 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.940 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.940 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:26.940 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:26.940 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:26.940 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.940 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.940 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:26.940 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.940 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:26.940 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:26.940 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:26.940 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:26.940 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.940 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.940 nvme0n1 00:28:26.940 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.940 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.940 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.940 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.940 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.940 
11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.200 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.200 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.200 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.200 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.200 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.200 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:27.201 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.201 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:27.201 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.201 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:27.201 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:27.201 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:27.201 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzQxODRjYjdlNDVmZDAwNDQ5MGRiYTI0MmQwMzJhMGMfn/Oq: 00:28:27.201 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: 00:28:27.201 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:27.201 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:27.201 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzQxODRjYjdlNDVmZDAwNDQ5MGRiYTI0MmQwMzJhMGMfn/Oq: 00:28:27.201 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: ]] 00:28:27.201 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: 00:28:27.201 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:27.201 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.201 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:27.201 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:27.201 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:27.201 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.201 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:27.201 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.201 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.201 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:28:27.201 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.201 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:27.201 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:27.201 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:27.201 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.201 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.201 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:27.201 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.201 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:27.201 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:27.201 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:27.201 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:27.201 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.201 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.462 nvme0n1 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZjBjMDNkYzI1YzljMTEwNTA5ZDNkMDg4YzlhOTg1MmU4NjZlMDUzZjFkMWYzODExmfYEhA==: 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBjMDNkYzI1YzljMTEwNTA5ZDNkMDg4YzlhOTg1MmU4NjZlMDUzZjFkMWYzODExmfYEhA==: 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: ]] 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:27.462 11:51:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.462 11:51:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.724 nvme0n1 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRkZWI4ZGEyNDE2OTYwNGE0YmYzZGJlMWJkZWIxMjncUftA: 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRkZWI4ZGEyNDE2OTYwNGE0YmYzZGJlMWJkZWIxMjncUftA: 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: ]] 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.724 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.985 nvme0n1 00:28:27.985 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.985 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.985 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.985 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.985 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.985 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.245 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.245 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.245 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.245 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.245 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.245 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.245 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:28:28.245 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.245 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:28.245 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:28.245 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:28.245 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTBlZjFhMThjYjRkYmQxMThkNzEzNzc4MmJmNjdmOWRkODdiN2YzNzg2OTQ3YTU2F1gSSQ==: 00:28:28.245 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: 00:28:28.245 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:28.245 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:28.245 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBlZjFhMThjYjRkYmQxMThkNzEzNzc4MmJmNjdmOWRkODdiN2YzNzg2OTQ3YTU2F1gSSQ==: 00:28:28.245 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: ]] 00:28:28.245 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: 00:28:28.245 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:28.245 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.245 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:28.245 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:28.245 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:28.245 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.245 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:28.245 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.245 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.245 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.245 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.245 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:28.245 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:28.245 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:28.245 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.245 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.245 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:28.245 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.245 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:28.245 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:28.246 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:28.246 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:28.246 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.246 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.506 nvme0n1 00:28:28.506 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.506 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.506 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.506 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.506 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.506 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.506 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.507 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.507 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.507 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.507 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.507 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.507 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:28.507 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.507 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:28.507 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:28.507 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:28.507 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTkzN2NjMGRkMDY2NGYxOTc2MjI0OWVmZDUzYjM5MjJmZWY2MWM4NWU5NWM1OGM3OWQxMGM3ZTY2MTVlNGIyNR9ULdY=: 00:28:28.507 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:28.507 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:28.507 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:28.507 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTkzN2NjMGRkMDY2NGYxOTc2MjI0OWVmZDUzYjM5MjJmZWY2MWM4NWU5NWM1OGM3OWQxMGM3ZTY2MTVlNGIyNR9ULdY=: 00:28:28.507 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:28.507 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:28.507 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.507 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:28.507 11:51:53 
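Before each connect, nvmet_auth_set_key (the host/auth.sh@42-51 frames above) primes the target with the key pair for the current keyid. The xtrace shows the echoes but not their redirect targets; the sketch below assumes they land in the Linux nvmet configfs attributes for the host NQN, which is how kernel targets take DH-HMAC-CHAP material. Treat the path and attribute names as assumptions; the echoed values are verbatim from the trace.

# Reconstruction of nvmet_auth_set_key sha384 ffdhe4096 <keyid>; $key/$ckey
# hold the DHHC-1:... strings echoed in the trace above.
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path
echo 'hmac(sha384)' > "$host/dhchap_hash"
echo ffdhe4096      > "$host/dhchap_dhgroup"
echo "$key"         > "$host/dhchap_key"
[[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"  # skipped for keyid=4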
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:28.507 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:28.507 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.507 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:28.507 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.507 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.507 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.507 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.507 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:28.507 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:28.507 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:28.507 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.507 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.507 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:28.507 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.507 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:28.507 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:28.507 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:28.507 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:28.507 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.507 11:51:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.768 nvme0n1 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzQxODRjYjdlNDVmZDAwNDQ5MGRiYTI0MmQwMzJhMGMfn/Oq: 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzQxODRjYjdlNDVmZDAwNDQ5MGRiYTI0MmQwMzJhMGMfn/Oq: 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: ]] 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.768 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.341 nvme0n1 00:28:29.341 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.341 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.341 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.341 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.341 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.341 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.341 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.341 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.341 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.341 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.341 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.342 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.342 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:29.342 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.342 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:29.342 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:29.342 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:29.342 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBjMDNkYzI1YzljMTEwNTA5ZDNkMDg4YzlhOTg1MmU4NjZlMDUzZjFkMWYzODExmfYEhA==: 00:28:29.342 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: 00:28:29.342 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:29.342 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:29.342 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZjBjMDNkYzI1YzljMTEwNTA5ZDNkMDg4YzlhOTg1MmU4NjZlMDUzZjFkMWYzODExmfYEhA==: 00:28:29.342 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: ]] 00:28:29.342 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: 00:28:29.342 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:29.342 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.342 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:29.342 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:29.342 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:29.342 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.342 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:29.342 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.342 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.342 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.342 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.342 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:29.342 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:29.342 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:29.342 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.342 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.342 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:29.342 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.342 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:29.342 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:29.342 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:29.342 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:29.342 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.342 11:51:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.915 nvme0n1 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.915 11:51:55 
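Each pass ends the same way, visible in the host/auth.sh@64-65 frames: assert that exactly one controller named nvme0 came up, then tear it down so the next keyid starts clean. The same check, condensed (scripts/rpc.py again standing in for the rpc_cmd wrapper):

# Authentication succeeded iff the attach produced controller nvme0.
name=$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]                          # fails the test on auth failure
scripts/rpc.py bdev_nvme_detach_controller nvme0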
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRkZWI4ZGEyNDE2OTYwNGE0YmYzZGJlMWJkZWIxMjncUftA: 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRkZWI4ZGEyNDE2OTYwNGE0YmYzZGJlMWJkZWIxMjncUftA: 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: ]] 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.915 11:51:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.915 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.177 nvme0n1 00:28:30.177 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.177 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.177 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.177 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.177 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.177 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.177 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.177 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.177 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.177 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.438 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.438 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.438 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:30.438 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.438 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:30.438 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:30.438 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:30.438 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OTBlZjFhMThjYjRkYmQxMThkNzEzNzc4MmJmNjdmOWRkODdiN2YzNzg2OTQ3YTU2F1gSSQ==: 00:28:30.438 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: 00:28:30.438 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:30.438 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:30.438 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBlZjFhMThjYjRkYmQxMThkNzEzNzc4MmJmNjdmOWRkODdiN2YzNzg2OTQ3YTU2F1gSSQ==: 00:28:30.438 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: ]] 00:28:30.438 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: 00:28:30.438 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:30.438 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.438 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:30.438 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:30.438 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:30.438 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.438 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:30.438 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.438 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.438 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.438 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.438 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:30.438 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:30.438 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:30.438 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.438 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.438 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:30.438 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.438 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:30.438 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:30.439 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:30.439 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:30.439 11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.439 
11:51:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.700 nvme0n1 00:28:30.700 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.700 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.700 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.700 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.700 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.700 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.700 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.700 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.700 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.700 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.700 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.700 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.700 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:30.700 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.700 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:30.700 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:30.700 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:30.700 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTkzN2NjMGRkMDY2NGYxOTc2MjI0OWVmZDUzYjM5MjJmZWY2MWM4NWU5NWM1OGM3OWQxMGM3ZTY2MTVlNGIyNR9ULdY=: 00:28:30.700 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:30.700 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:30.700 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:30.700 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTkzN2NjMGRkMDY2NGYxOTc2MjI0OWVmZDUzYjM5MjJmZWY2MWM4NWU5NWM1OGM3OWQxMGM3ZTY2MTVlNGIyNR9ULdY=: 00:28:30.700 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:30.700 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:30.700 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.700 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:30.700 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:30.700 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:30.700 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.700 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:30.700 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.700 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.700 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.961 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.961 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:30.961 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:30.961 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:30.961 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.961 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.961 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:30.961 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.961 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:30.961 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:30.961 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:30.961 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:30.961 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.961 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.221 nvme0n1 00:28:31.221 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.221 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.221 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.221 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.221 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.221 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.222 11:51:56 
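The whole section is one sweep, per the host/auth.sh@101-104 frames that recur above: for each DH group, walk every keyid, prime the target, then connect and authenticate. The driver loop below is a reconstruction of that shape; only sha384 and the groups ffdhe4096/ffdhe6144/ffdhe8192 appear in this excerpt, so the digest is shown fixed.

for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do
    for keyid in "${!keys[@]}"; do                      # 0..4 in this run
        nvmet_auth_set_key sha384 "$dhgroup" "$keyid"   # target side
        connect_authenticate sha384 "$dhgroup" "$keyid" # attach, verify, detach
    done
done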
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzQxODRjYjdlNDVmZDAwNDQ5MGRiYTI0MmQwMzJhMGMfn/Oq: 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzQxODRjYjdlNDVmZDAwNDQ5MGRiYTI0MmQwMzJhMGMfn/Oq: 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: ]] 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.222 11:51:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.164 nvme0n1 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBjMDNkYzI1YzljMTEwNTA5ZDNkMDg4YzlhOTg1MmU4NjZlMDUzZjFkMWYzODExmfYEhA==: 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBjMDNkYzI1YzljMTEwNTA5ZDNkMDg4YzlhOTg1MmU4NjZlMDUzZjFkMWYzODExmfYEhA==: 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: ]] 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.164 11:51:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.737 nvme0n1 00:28:32.737 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.737 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.737 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.737 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.737 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.737 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.737 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.737 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.737 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:28:32.738 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.738 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.738 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.738 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:32.738 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.738 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:32.738 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:32.738 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:32.738 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRkZWI4ZGEyNDE2OTYwNGE0YmYzZGJlMWJkZWIxMjncUftA: 00:28:32.738 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: 00:28:32.738 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:32.738 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:32.738 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRkZWI4ZGEyNDE2OTYwNGE0YmYzZGJlMWJkZWIxMjncUftA: 00:28:32.738 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: ]] 00:28:32.738 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: 00:28:32.738 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:32.738 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.738 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:32.738 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:32.738 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:32.738 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.738 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:32.738 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.738 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.738 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.738 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.738 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:32.738 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:32.738 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:32.738 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.738 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.738 
11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:32.738 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.738 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:32.738 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:32.738 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:32.738 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:32.738 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.738 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.309 nvme0n1 00:28:33.309 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.309 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.309 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.309 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.309 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.309 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.569 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.569 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.569 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.569 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.569 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.569 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.569 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:33.569 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.569 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:33.569 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:33.569 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:33.569 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTBlZjFhMThjYjRkYmQxMThkNzEzNzc4MmJmNjdmOWRkODdiN2YzNzg2OTQ3YTU2F1gSSQ==: 00:28:33.570 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: 00:28:33.570 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:33.570 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:33.570 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBlZjFhMThjYjRkYmQxMThkNzEzNzc4MmJmNjdmOWRkODdiN2YzNzg2OTQ3YTU2F1gSSQ==: 00:28:33.570 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: ]] 00:28:33.570 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: 00:28:33.570 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:33.570 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.570 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:33.570 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:33.570 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:33.570 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.570 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:33.570 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.570 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.570 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.570 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.570 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:33.570 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:33.570 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:33.570 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.570 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.570 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:33.570 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.570 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:33.570 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:33.570 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:33.570 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:33.570 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.570 11:51:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.143 nvme0n1 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.143 11:51:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTkzN2NjMGRkMDY2NGYxOTc2MjI0OWVmZDUzYjM5MjJmZWY2MWM4NWU5NWM1OGM3OWQxMGM3ZTY2MTVlNGIyNR9ULdY=: 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTkzN2NjMGRkMDY2NGYxOTc2MjI0OWVmZDUzYjM5MjJmZWY2MWM4NWU5NWM1OGM3OWQxMGM3ZTY2MTVlNGIyNR9ULdY=: 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:34.143 11:51:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.143 11:51:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.086 nvme0n1 00:28:35.086 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.086 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.086 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.086 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.086 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.086 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.086 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.086 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.086 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.086 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.086 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.086 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:35.086 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:35.086 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.086 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:35.086 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzQxODRjYjdlNDVmZDAwNDQ5MGRiYTI0MmQwMzJhMGMfn/Oq: 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzQxODRjYjdlNDVmZDAwNDQ5MGRiYTI0MmQwMzJhMGMfn/Oq: 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: ]] 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:35.087 nvme0n1 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBjMDNkYzI1YzljMTEwNTA5ZDNkMDg4YzlhOTg1MmU4NjZlMDUzZjFkMWYzODExmfYEhA==: 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBjMDNkYzI1YzljMTEwNTA5ZDNkMDg4YzlhOTg1MmU4NjZlMDUzZjFkMWYzODExmfYEhA==: 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: ]] 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.087 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.348 nvme0n1 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:35.348 
11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRkZWI4ZGEyNDE2OTYwNGE0YmYzZGJlMWJkZWIxMjncUftA: 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRkZWI4ZGEyNDE2OTYwNGE0YmYzZGJlMWJkZWIxMjncUftA: 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: ]] 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.348 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.610 nvme0n1 00:28:35.610 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.610 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.610 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.610 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.610 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.610 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.610 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.610 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.610 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.610 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.610 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.610 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.610 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:35.610 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.610 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:35.610 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:35.610 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:35.610 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTBlZjFhMThjYjRkYmQxMThkNzEzNzc4MmJmNjdmOWRkODdiN2YzNzg2OTQ3YTU2F1gSSQ==: 00:28:35.610 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: 00:28:35.610 11:52:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:35.610 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:35.610 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBlZjFhMThjYjRkYmQxMThkNzEzNzc4MmJmNjdmOWRkODdiN2YzNzg2OTQ3YTU2F1gSSQ==: 00:28:35.610 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: ]] 00:28:35.610 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: 00:28:35.610 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:35.610 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.610 
11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:35.610 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:35.610 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:35.610 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.610 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:35.610 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.610 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.610 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.610 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.610 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:35.610 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:35.610 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:35.610 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.610 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.610 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:35.610 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.610 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:35.610 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:35.610 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:35.610 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:35.610 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.610 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.871 nvme0n1 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTkzN2NjMGRkMDY2NGYxOTc2MjI0OWVmZDUzYjM5MjJmZWY2MWM4NWU5NWM1OGM3OWQxMGM3ZTY2MTVlNGIyNR9ULdY=: 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTkzN2NjMGRkMDY2NGYxOTc2MjI0OWVmZDUzYjM5MjJmZWY2MWM4NWU5NWM1OGM3OWQxMGM3ZTY2MTVlNGIyNR9ULdY=: 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.871 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.133 nvme0n1 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzQxODRjYjdlNDVmZDAwNDQ5MGRiYTI0MmQwMzJhMGMfn/Oq: 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzQxODRjYjdlNDVmZDAwNDQ5MGRiYTI0MmQwMzJhMGMfn/Oq: 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: ]] 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.133 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.394 nvme0n1 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.394 
11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBjMDNkYzI1YzljMTEwNTA5ZDNkMDg4YzlhOTg1MmU4NjZlMDUzZjFkMWYzODExmfYEhA==: 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBjMDNkYzI1YzljMTEwNTA5ZDNkMDg4YzlhOTg1MmU4NjZlMDUzZjFkMWYzODExmfYEhA==: 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: ]] 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:36.394 11:52:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.394 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.655 nvme0n1 00:28:36.655 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.655 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.655 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.655 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.655 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.655 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.655 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.655 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.655 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.655 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.655 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.655 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.655 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:36.655 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.655 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:36.655 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:36.655 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:36.655 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRkZWI4ZGEyNDE2OTYwNGE0YmYzZGJlMWJkZWIxMjncUftA: 00:28:36.655 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: 00:28:36.655 11:52:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:36.655 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:36.655 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRkZWI4ZGEyNDE2OTYwNGE0YmYzZGJlMWJkZWIxMjncUftA: 00:28:36.655 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: ]] 00:28:36.655 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: 00:28:36.655 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:36.655 11:52:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.655 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:36.655 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:36.655 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:36.655 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.655 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:36.655 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.655 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.655 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.655 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.655 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:36.655 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:36.655 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:36.655 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.655 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.655 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:36.655 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.655 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:36.655 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:36.655 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:36.655 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:36.655 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.655 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.916 nvme0n1 00:28:36.916 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.916 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.916 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.916 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.916 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.916 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.916 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.916 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.916 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.916 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.916 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.916 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.916 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:36.916 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.916 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:36.916 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:36.916 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:36.916 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTBlZjFhMThjYjRkYmQxMThkNzEzNzc4MmJmNjdmOWRkODdiN2YzNzg2OTQ3YTU2F1gSSQ==: 00:28:36.916 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: 00:28:36.916 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:36.916 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:36.916 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBlZjFhMThjYjRkYmQxMThkNzEzNzc4MmJmNjdmOWRkODdiN2YzNzg2OTQ3YTU2F1gSSQ==: 00:28:36.916 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: ]] 00:28:36.916 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: 00:28:36.916 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:36.916 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.916 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:36.916 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:36.916 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:36.916 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.916 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:36.916 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.916 11:52:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.916 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.916 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.916 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:36.916 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:36.916 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:36.916 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.916 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.916 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:36.917 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.917 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:36.917 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:36.917 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:36.917 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:36.917 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.917 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.177 nvme0n1 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:37.177 
11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTkzN2NjMGRkMDY2NGYxOTc2MjI0OWVmZDUzYjM5MjJmZWY2MWM4NWU5NWM1OGM3OWQxMGM3ZTY2MTVlNGIyNR9ULdY=: 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTkzN2NjMGRkMDY2NGYxOTc2MjI0OWVmZDUzYjM5MjJmZWY2MWM4NWU5NWM1OGM3OWQxMGM3ZTY2MTVlNGIyNR9ULdY=: 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.177 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
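At this point the trace has cycled sha512/ffdhe3072 through key IDs 3 and 4 and moves on to the ffdhe4096 group. Since the xtrace records are dense, the following is a minimal bash sketch, reconstructed only from the commands visible in this trace, of the cycle host/auth.sh repeats for every (digest, dhgroup, keyid) combination. The configfs paths on the target side and the rpc.py location are assumptions added here for illustration; the RPC names, flags, NQNs, and address come verbatim from the trace. The key names key3/ckey3 are assumed to refer to keyring entries the script registered earlier, outside this excerpt.

    # Sketch of one nvmet_auth_set_key + connect_authenticate iteration,
    # as seen in the trace above. NOT the verbatim test script.
    rpc_py="scripts/rpc.py"                     # assumed location of SPDK's rpc.py
    hostnqn="nqn.2024-02.io.spdk:host0"
    subnqn="nqn.2024-02.io.spdk:cnode0"
    host_cfs="/sys/kernel/config/nvmet/hosts/$hostnqn"   # assumed configfs path

    # Target side (host/auth.sh@48-51): program the digest, DH group,
    # DH-HMAC-CHAP key, and optional controller (bidirectional) key.
    echo 'hmac(sha512)' > "$host_cfs/dhchap_hash"
    echo ffdhe3072      > "$host_cfs/dhchap_dhgroup"
    echo "$key"         > "$host_cfs/dhchap_key"
    [ -n "$ckey" ] && echo "$ckey" > "$host_cfs/dhchap_ctrl_key"

    # Host side (host/auth.sh@60-65): restrict the initiator to the same
    # digest/DH group, connect with the matching key, verify the controller
    # came up, then tear it down before the next keyid.
    $rpc_py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
    $rpc_py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q "$hostnqn" -n "$subnqn" --dhchap-key key3 --dhchap-ctrlr-key ckey3
    [ "$($rpc_py bdev_nvme_get_controllers | jq -r '.[].name')" = nvme0 ]
    $rpc_py bdev_nvme_detach_controller nvme0

The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line traced at host/auth.sh@58 is the bash idiom that makes the controller key optional: when ckeys[keyid] is empty (as for keyid=4 above, where ckey is set to the empty string), the array expands to nothing and bdev_nvme_attach_controller is invoked with --dhchap-key alone, exercising unidirectional authentication; otherwise both keys are passed and the host also authenticates the controller.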
00:28:37.438 nvme0n1 00:28:37.438 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.438 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.438 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.438 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.438 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.438 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.438 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.438 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.438 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.438 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.439 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.439 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:37.439 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.439 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:28:37.439 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.439 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:37.439 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:37.439 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:37.439 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzQxODRjYjdlNDVmZDAwNDQ5MGRiYTI0MmQwMzJhMGMfn/Oq: 00:28:37.439 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: 00:28:37.439 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:37.439 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:37.439 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzQxODRjYjdlNDVmZDAwNDQ5MGRiYTI0MmQwMzJhMGMfn/Oq: 00:28:37.439 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: ]] 00:28:37.439 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: 00:28:37.439 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:37.439 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.439 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:37.439 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:37.439 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:37.439 11:52:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.439 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:37.439 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.439 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.439 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.439 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.439 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.439 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.439 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.439 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.439 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.439 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.439 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.439 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.439 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:37.439 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.439 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:37.439 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.439 11:52:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.702 nvme0n1 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.702 11:52:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBjMDNkYzI1YzljMTEwNTA5ZDNkMDg4YzlhOTg1MmU4NjZlMDUzZjFkMWYzODExmfYEhA==: 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBjMDNkYzI1YzljMTEwNTA5ZDNkMDg4YzlhOTg1MmU4NjZlMDUzZjFkMWYzODExmfYEhA==: 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: ]] 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.702 11:52:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.702 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.963 nvme0n1 00:28:37.963 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.963 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.963 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.963 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.964 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.964 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.964 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.964 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.964 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.964 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.225 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.225 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.225 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:38.225 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.225 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:38.225 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:38.225 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:38.225 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRkZWI4ZGEyNDE2OTYwNGE0YmYzZGJlMWJkZWIxMjncUftA: 00:28:38.225 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: 00:28:38.225 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:38.225 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:38.225 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRkZWI4ZGEyNDE2OTYwNGE0YmYzZGJlMWJkZWIxMjncUftA: 00:28:38.225 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: ]] 00:28:38.225 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: 00:28:38.225 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:38.225 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.225 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:38.225 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:38.225 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:38.225 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.225 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:38.225 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.225 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.225 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.225 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.225 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:38.225 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:38.225 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:38.225 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.225 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.225 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:38.225 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.225 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:38.225 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:38.225 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:38.225 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:38.225 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.225 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.486 nvme0n1 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTBlZjFhMThjYjRkYmQxMThkNzEzNzc4MmJmNjdmOWRkODdiN2YzNzg2OTQ3YTU2F1gSSQ==: 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBlZjFhMThjYjRkYmQxMThkNzEzNzc4MmJmNjdmOWRkODdiN2YzNzg2OTQ3YTU2F1gSSQ==: 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: ]] 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.486 11:52:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.747 nvme0n1 00:28:38.747 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.747 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.747 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.747 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.747 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTkzN2NjMGRkMDY2NGYxOTc2MjI0OWVmZDUzYjM5MjJmZWY2MWM4NWU5NWM1OGM3OWQxMGM3ZTY2MTVlNGIyNR9ULdY=: 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YTkzN2NjMGRkMDY2NGYxOTc2MjI0OWVmZDUzYjM5MjJmZWY2MWM4NWU5NWM1OGM3OWQxMGM3ZTY2MTVlNGIyNR9ULdY=: 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:38.748 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.051 nvme0n1 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzQxODRjYjdlNDVmZDAwNDQ5MGRiYTI0MmQwMzJhMGMfn/Oq: 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzQxODRjYjdlNDVmZDAwNDQ5MGRiYTI0MmQwMzJhMGMfn/Oq: 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: ]] 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.051 11:52:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.051 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.623 nvme0n1 00:28:39.623 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.623 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.623 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.623 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.623 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.623 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.623 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.623 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.624 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.624 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.624 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.624 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.624 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:39.624 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.624 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:39.624 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:39.624 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:39.624 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZjBjMDNkYzI1YzljMTEwNTA5ZDNkMDg4YzlhOTg1MmU4NjZlMDUzZjFkMWYzODExmfYEhA==: 00:28:39.624 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: 00:28:39.624 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:39.624 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:39.624 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBjMDNkYzI1YzljMTEwNTA5ZDNkMDg4YzlhOTg1MmU4NjZlMDUzZjFkMWYzODExmfYEhA==: 00:28:39.624 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: ]] 00:28:39.624 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: 00:28:39.624 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:28:39.624 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.624 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:39.624 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:39.624 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:39.624 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.624 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:39.624 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.624 11:52:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.624 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.624 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.624 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:39.624 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:39.624 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:39.624 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.624 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.624 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:39.624 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.624 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:39.624 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:39.624 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:39.624 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:39.624 11:52:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.624 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.884 nvme0n1 00:28:39.884 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.884 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.884 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.884 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.884 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRkZWI4ZGEyNDE2OTYwNGE0YmYzZGJlMWJkZWIxMjncUftA: 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRkZWI4ZGEyNDE2OTYwNGE0YmYzZGJlMWJkZWIxMjncUftA: 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: ]] 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.145 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.406 nvme0n1 00:28:40.406 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.406 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.406 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.406 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.406 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.406 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTBlZjFhMThjYjRkYmQxMThkNzEzNzc4MmJmNjdmOWRkODdiN2YzNzg2OTQ3YTU2F1gSSQ==: 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBlZjFhMThjYjRkYmQxMThkNzEzNzc4MmJmNjdmOWRkODdiN2YzNzg2OTQ3YTU2F1gSSQ==: 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: ]] 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.667 11:52:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.928 nvme0n1 00:28:40.928 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.928 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.928 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.928 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.928 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.928 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.928 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.928 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.928 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.928 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.189 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.189 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.189 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:41.189 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.189 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:41.189 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:41.189 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:41.189 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTkzN2NjMGRkMDY2NGYxOTc2MjI0OWVmZDUzYjM5MjJmZWY2MWM4NWU5NWM1OGM3OWQxMGM3ZTY2MTVlNGIyNR9ULdY=: 00:28:41.189 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:41.189 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:41.189 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:41.189 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTkzN2NjMGRkMDY2NGYxOTc2MjI0OWVmZDUzYjM5MjJmZWY2MWM4NWU5NWM1OGM3OWQxMGM3ZTY2MTVlNGIyNR9ULdY=: 00:28:41.189 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:41.189 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:41.189 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.189 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:41.189 11:52:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:41.189 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:41.189 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.189 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:41.189 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.189 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.189 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.189 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.189 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:41.189 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:41.189 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:41.189 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.189 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.189 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:41.189 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.189 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:41.189 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:41.189 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:41.189 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:41.189 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.189 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.450 nvme0n1 00:28:41.450 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.450 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.450 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.450 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.450 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.450 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.450 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.450 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.450 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.450 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.450 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.450 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:41.450 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.450 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:41.450 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.450 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:41.450 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:41.450 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:41.450 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzQxODRjYjdlNDVmZDAwNDQ5MGRiYTI0MmQwMzJhMGMfn/Oq: 00:28:41.450 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: 00:28:41.450 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:41.450 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:41.450 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzQxODRjYjdlNDVmZDAwNDQ5MGRiYTI0MmQwMzJhMGMfn/Oq: 00:28:41.450 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: ]] 00:28:41.450 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmEzMTJkNzRhMzUyOWExNjUyMmE5NWU5ZWM4NDY5MDE5ZmUxNzU0M2E2MjExNDQyNmM2MTliNDk4NzdiOTljMWwEzAM=: 00:28:41.450 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:41.450 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.450 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:41.450 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:41.450 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:41.450 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.450 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:41.450 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.450 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.450 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.711 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.711 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:41.711 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:41.711 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:41.711 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.711 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.711 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:41.711 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.711 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:41.711 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:41.711 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:41.711 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:41.711 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.711 11:52:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.281 nvme0n1 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBjMDNkYzI1YzljMTEwNTA5ZDNkMDg4YzlhOTg1MmU4NjZlMDUzZjFkMWYzODExmfYEhA==: 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZjBjMDNkYzI1YzljMTEwNTA5ZDNkMDg4YzlhOTg1MmU4NjZlMDUzZjFkMWYzODExmfYEhA==: 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: ]] 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.281 11:52:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.852 nvme0n1 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.852 11:52:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRkZWI4ZGEyNDE2OTYwNGE0YmYzZGJlMWJkZWIxMjncUftA: 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRkZWI4ZGEyNDE2OTYwNGE0YmYzZGJlMWJkZWIxMjncUftA: 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: ]] 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:42.852 11:52:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:42.852 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:43.114 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:43.114 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.114 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.685 nvme0n1 00:28:43.685 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.685 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.685 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.685 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.685 11:52:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.685 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.685 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.685 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.685 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.685 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.685 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.685 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.685 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:43.685 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.685 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:43.685 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:43.685 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:43.685 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OTBlZjFhMThjYjRkYmQxMThkNzEzNzc4MmJmNjdmOWRkODdiN2YzNzg2OTQ3YTU2F1gSSQ==: 00:28:43.685 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: 00:28:43.685 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:43.685 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:43.685 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBlZjFhMThjYjRkYmQxMThkNzEzNzc4MmJmNjdmOWRkODdiN2YzNzg2OTQ3YTU2F1gSSQ==: 00:28:43.685 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: ]] 00:28:43.685 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MzhkZDE4MzlmMWM4ODc5YTNmYTRlZmI0OGNlZjdjZjBE7Npp: 00:28:43.685 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:43.685 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.685 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:43.685 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:43.685 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:43.685 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.685 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:43.685 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.685 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.685 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.685 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.685 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:43.685 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:43.685 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:43.685 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.685 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.685 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:43.685 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.685 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:43.685 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:43.685 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:43.686 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:43.686 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.686 
11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.258 nvme0n1 00:28:44.258 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.258 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.258 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.258 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.258 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.258 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.518 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.518 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.519 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.519 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.519 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.519 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.519 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:44.519 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.519 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:44.519 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:44.519 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:44.519 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTkzN2NjMGRkMDY2NGYxOTc2MjI0OWVmZDUzYjM5MjJmZWY2MWM4NWU5NWM1OGM3OWQxMGM3ZTY2MTVlNGIyNR9ULdY=: 00:28:44.519 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:44.519 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:44.519 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:44.519 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTkzN2NjMGRkMDY2NGYxOTc2MjI0OWVmZDUzYjM5MjJmZWY2MWM4NWU5NWM1OGM3OWQxMGM3ZTY2MTVlNGIyNR9ULdY=: 00:28:44.519 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:44.519 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:44.519 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.519 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:44.519 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:44.519 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:44.519 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.519 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:44.519 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.519 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.519 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:44.519 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.519 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:44.519 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:44.519 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:44.519 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.519 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.519 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:44.519 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.519 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:44.519 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:44.519 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:44.519 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:44.519 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:44.519 11:52:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.090 nvme0n1 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBjMDNkYzI1YzljMTEwNTA5ZDNkMDg4YzlhOTg1MmU4NjZlMDUzZjFkMWYzODExmfYEhA==: 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBjMDNkYzI1YzljMTEwNTA5ZDNkMDg4YzlhOTg1MmU4NjZlMDUzZjFkMWYzODExmfYEhA==: 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: ]] 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.090 request: 00:28:45.090 { 00:28:45.090 "name": "nvme0", 00:28:45.090 "trtype": "tcp", 00:28:45.090 "traddr": "10.0.0.1", 00:28:45.090 "adrfam": "ipv4", 00:28:45.090 "trsvcid": "4420", 00:28:45.090 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:45.090 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:45.090 "prchk_reftag": false, 00:28:45.090 "prchk_guard": false, 00:28:45.090 "hdgst": false, 00:28:45.090 "ddgst": false, 00:28:45.090 "allow_unrecognized_csi": false, 00:28:45.090 "method": "bdev_nvme_attach_controller", 00:28:45.090 "req_id": 1 00:28:45.090 } 00:28:45.090 Got JSON-RPC error response 00:28:45.090 response: 00:28:45.090 { 00:28:45.090 "code": -5, 00:28:45.090 "message": "Input/output error" 00:28:45.090 } 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.090 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.351 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:45.351 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:45.351 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:45.351 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
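
The trace above is the first negative check of this pass: with DH-HMAC-CHAP now required by the target, host/auth.sh wraps a plain attach (no --dhchap-key) in NOT, the JSON-RPC request fails with code -5 ("Input/output error"), and bdev_nvme_get_controllers is then queried to confirm that no controller was created. A minimal standalone sketch of the same check, assuming a running SPDK target configured as in this trace; the RPC method and its flags are taken verbatim from the trace, while the rpc.py path is an assumption based on the scripts/ directory used elsewhere in this job:

    #!/usr/bin/env bash
    # Expect an attach without DH-HMAC-CHAP keys to be rejected by the target.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed path

    if "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
        echo "unauthenticated attach unexpectedly succeeded" >&2
        exit 1
    fi
    # The failed attach must not leave a controller behind.
    [[ $("$rpc" bdev_nvme_get_controllers | jq length) -eq 0 ]]
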
00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.352 request: 00:28:45.352 { 00:28:45.352 "name": "nvme0", 00:28:45.352 "trtype": "tcp", 00:28:45.352 "traddr": "10.0.0.1", 00:28:45.352 "adrfam": "ipv4", 00:28:45.352 "trsvcid": "4420", 00:28:45.352 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:45.352 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:45.352 "prchk_reftag": false, 00:28:45.352 "prchk_guard": false, 00:28:45.352 "hdgst": false, 00:28:45.352 "ddgst": false, 00:28:45.352 "dhchap_key": "key2", 00:28:45.352 "allow_unrecognized_csi": false, 00:28:45.352 "method": "bdev_nvme_attach_controller", 00:28:45.352 "req_id": 1 00:28:45.352 } 00:28:45.352 Got JSON-RPC error response 00:28:45.352 response: 00:28:45.352 { 00:28:45.352 "code": -5, 00:28:45.352 "message": "Input/output error" 00:28:45.352 } 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
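
The second rejection above is the wrong-key variant: the target was keyed for keyid 1 in this round, so an attach presenting --dhchap-key key2 must also fail with -5, and the controller count is again checked against zero. The NOT wrapper seen throughout this trace comes from autotest_common.sh and is essentially an exit-status inverter; a sketch of its semantics under that assumption (not a verbatim copy of the helper):

    # Succeeds only when the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1    # expected a failure, got success
        fi
        return 0
    }

    # Usage mirroring the trace (assumes rpc.py is on PATH): a mismatched
    # DH-HMAC-CHAP key must be rejected by the authenticating target.
    NOT rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2
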
00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.352 request: 00:28:45.352 { 00:28:45.352 "name": "nvme0", 00:28:45.352 "trtype": "tcp", 00:28:45.352 "traddr": "10.0.0.1", 00:28:45.352 "adrfam": "ipv4", 00:28:45.352 "trsvcid": "4420", 00:28:45.352 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:45.352 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:45.352 "prchk_reftag": false, 00:28:45.352 "prchk_guard": false, 00:28:45.352 "hdgst": false, 00:28:45.352 "ddgst": false, 00:28:45.352 "dhchap_key": "key1", 00:28:45.352 "dhchap_ctrlr_key": "ckey2", 00:28:45.352 "allow_unrecognized_csi": false, 00:28:45.352 "method": "bdev_nvme_attach_controller", 00:28:45.352 "req_id": 1 00:28:45.352 } 00:28:45.352 Got JSON-RPC error response 00:28:45.352 response: 00:28:45.352 { 00:28:45.352 "code": -5, 00:28:45.352 "message": "Input/output 
error" 00:28:45.352 } 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.352 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.613 nvme0n1 00:28:45.613 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.613 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:45.613 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.613 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:45.613 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:45.613 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:45.613 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRkZWI4ZGEyNDE2OTYwNGE0YmYzZGJlMWJkZWIxMjncUftA: 00:28:45.613 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: 00:28:45.613 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:45.613 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:45.613 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRkZWI4ZGEyNDE2OTYwNGE0YmYzZGJlMWJkZWIxMjncUftA: 00:28:45.613 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: ]] 00:28:45.613 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: 00:28:45.613 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:45.613 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.613 11:52:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.613 11:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.613 11:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.613 11:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:28:45.613 11:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.613 11:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.613 11:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.613 11:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.613 11:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:45.613 11:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:45.613 11:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:45.613 11:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:45.613 11:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:45.613 11:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:45.613 11:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:45.613 11:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:45.613 11:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.613 11:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.874 request: 00:28:45.874 { 00:28:45.874 "name": "nvme0", 00:28:45.874 "dhchap_key": "key1", 00:28:45.874 "dhchap_ctrlr_key": "ckey2", 00:28:45.874 "method": "bdev_nvme_set_keys", 00:28:45.874 "req_id": 1 00:28:45.874 } 00:28:45.874 Got JSON-RPC error response 00:28:45.874 response: 00:28:45.874 { 00:28:45.874 "code": -13, 00:28:45.874 "message": "Permission denied" 00:28:45.874 } 00:28:45.874 11:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:45.874 11:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:45.874 11:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:45.874 11:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:45.874 11:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:28:45.874 11:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.874 11:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:45.874 11:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:45.874 11:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.874 11:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:45.874 11:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:45.874 11:52:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:46.816 11:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.816 11:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:46.816 11:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.816 11:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.816 11:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.816 11:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:46.816 11:52:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:47.775 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.775 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:47.775 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:47.776 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.776 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.068 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:28:48.068 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:48.068 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.068 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:48.068 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:48.068 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:48.068 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjBjMDNkYzI1YzljMTEwNTA5ZDNkMDg4YzlhOTg1MmU4NjZlMDUzZjFkMWYzODExmfYEhA==: 00:28:48.068 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: 00:28:48.068 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:48.068 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:48.068 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjBjMDNkYzI1YzljMTEwNTA5ZDNkMDg4YzlhOTg1MmU4NjZlMDUzZjFkMWYzODExmfYEhA==: 00:28:48.068 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: ]] 00:28:48.068 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:OWFjZjUwNGJmNzhlNjFmYTM5MTNlNTA1Yjg4NGFmZTRkNzk5NGFmYzYyOThkY2E5rQl2Gw==: 00:28:48.068 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:28:48.068 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:48.068 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:48.068 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:48.068 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.068 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.068 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:48.068 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.068 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:48.068 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:48.068 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:48.068 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:48.068 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.068 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.068 nvme0n1 00:28:48.068 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.068 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:48.068 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.068 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:48.068 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:48.069 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:48.069 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGRkZWI4ZGEyNDE2OTYwNGE0YmYzZGJlMWJkZWIxMjncUftA: 00:28:48.069 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: 00:28:48.069 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:48.069 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:48.069 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGRkZWI4ZGEyNDE2OTYwNGE0YmYzZGJlMWJkZWIxMjncUftA: 00:28:48.069 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: ]] 00:28:48.069 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODUxMjQ1ODU0NGY3NDgyN2I0MDU0NzVlNzVlZjJiMWRwbbHS: 00:28:48.069 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:48.069 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:28:48.069 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:48.069 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:48.069 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:48.069 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:48.069 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:48.069 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:48.069 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.069 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.069 request: 00:28:48.069 { 00:28:48.069 "name": "nvme0", 00:28:48.069 "dhchap_key": "key2", 00:28:48.069 "dhchap_ctrlr_key": "ckey1", 00:28:48.069 "method": "bdev_nvme_set_keys", 00:28:48.069 "req_id": 1 00:28:48.069 } 00:28:48.069 Got JSON-RPC error response 00:28:48.069 response: 00:28:48.069 { 00:28:48.069 "code": -13, 00:28:48.069 "message": "Permission denied" 00:28:48.069 } 00:28:48.069 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:48.069 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:48.069 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:48.069 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:48.069 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:48.069 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.069 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:48.069 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.069 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.069 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:48.389 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:28:48.389 11:52:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:28:49.336 11:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.336 11:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:49.336 11:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:49.336 11:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.336 11:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.336 11:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:28:49.336 11:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:28:49.336 11:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:28:49.336 11:52:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:49.337 11:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:49.337 11:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:28:49.337 11:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:49.337 11:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:28:49.337 11:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:49.337 11:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:49.337 rmmod nvme_tcp 00:28:49.337 rmmod nvme_fabrics 00:28:49.337 11:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:49.337 11:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:28:49.337 11:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:28:49.337 11:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1228626 ']' 00:28:49.337 11:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1228626 00:28:49.337 11:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 1228626 ']' 00:28:49.337 11:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 1228626 00:28:49.337 11:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:28:49.337 11:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:49.337 11:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1228626 00:28:49.337 11:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:49.337 11:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:49.337 11:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1228626' 00:28:49.337 killing process with pid 1228626 00:28:49.337 11:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 1228626 00:28:49.337 11:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 1228626 00:28:49.337 11:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:49.337 11:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:49.337 11:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:49.337 11:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:28:49.337 11:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:49.337 11:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:28:49.337 11:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:28:49.337 11:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:49.337 11:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:49.337 11:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.597 11:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:28:49.597 11:52:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.510 11:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:51.510 11:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:51.510 11:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:51.511 11:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:51.511 11:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:51.511 11:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:28:51.511 11:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:51.511 11:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:51.511 11:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:51.511 11:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:51.511 11:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:51.511 11:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:51.511 11:52:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:55.718 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:55.718 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:55.718 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:55.718 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:55.718 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:55.718 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:55.718 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:55.718 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:55.718 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:55.718 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:55.718 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:55.718 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:55.718 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:55.718 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:55.718 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:55.718 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:55.718 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:55.718 11:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.17H /tmp/spdk.key-null.HcU /tmp/spdk.key-sha256.0gc /tmp/spdk.key-sha384.1IR /tmp/spdk.key-sha512.bey /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:55.718 11:52:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:59.024 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:59.024 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:59.024 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
00:28:59.024 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:59.024 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:59.024 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:59.024 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:59.024 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:59.024 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:59.024 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:59.024 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:59.024 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:59.024 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:59.024 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:59.285 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:59.285 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:59.285 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:59.546 00:28:59.546 real 1m0.811s 00:28:59.546 user 0m54.500s 00:28:59.546 sys 0m16.177s 00:28:59.546 11:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:59.546 11:52:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.546 ************************************ 00:28:59.546 END TEST nvmf_auth_host 00:28:59.546 ************************************ 00:28:59.546 11:52:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:28:59.546 11:52:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:59.546 11:52:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:59.546 11:52:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:59.546 11:52:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.546 ************************************ 00:28:59.546 START TEST nvmf_digest 00:28:59.546 ************************************ 00:28:59.546 11:52:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:59.808 * Looking for test storage... 
00:28:59.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:59.808 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:59.808 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:28:59.808 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:59.808 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:59.808 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:59.808 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:59.808 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:59.808 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:28:59.808 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:28:59.808 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:28:59.808 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:28:59.808 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:28:59.808 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:28:59.808 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:28:59.808 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:59.808 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:28:59.808 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:28:59.808 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:59.808 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:59.808 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:28:59.808 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:28:59.808 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:59.808 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:28:59.808 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:59.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.809 --rc genhtml_branch_coverage=1 00:28:59.809 --rc genhtml_function_coverage=1 00:28:59.809 --rc genhtml_legend=1 00:28:59.809 --rc geninfo_all_blocks=1 00:28:59.809 --rc geninfo_unexecuted_blocks=1 00:28:59.809 00:28:59.809 ' 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:59.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.809 --rc genhtml_branch_coverage=1 00:28:59.809 --rc genhtml_function_coverage=1 00:28:59.809 --rc genhtml_legend=1 00:28:59.809 --rc geninfo_all_blocks=1 00:28:59.809 --rc geninfo_unexecuted_blocks=1 00:28:59.809 00:28:59.809 ' 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:59.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.809 --rc genhtml_branch_coverage=1 00:28:59.809 --rc genhtml_function_coverage=1 00:28:59.809 --rc genhtml_legend=1 00:28:59.809 --rc geninfo_all_blocks=1 00:28:59.809 --rc geninfo_unexecuted_blocks=1 00:28:59.809 00:28:59.809 ' 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:59.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:59.809 --rc genhtml_branch_coverage=1 00:28:59.809 --rc genhtml_function_coverage=1 00:28:59.809 --rc genhtml_legend=1 00:28:59.809 --rc geninfo_all_blocks=1 00:28:59.809 --rc geninfo_unexecuted_blocks=1 00:28:59.809 00:28:59.809 ' 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:59.809 
11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:59.809 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:59.810 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:59.810 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:59.810 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:59.810 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:59.810 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:59.810 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:59.810 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:59.810 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:59.810 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:59.810 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:59.810 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:59.810 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:59.810 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:59.810 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:59.810 11:52:25 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:59.810 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:59.810 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:59.810 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:59.810 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:59.810 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:28:59.810 11:52:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:07.956 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:07.956 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:29:07.956 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:07.956 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:07.956 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:07.956 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:07.956 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:07.956 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:29:07.956 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:07.956 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:29:07.956 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:29:07.956 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:29:07.956 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:29:07.956 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:29:07.956 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:29:07.956 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:07.956 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:07.956 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:07.957 
11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:07.957 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:07.957 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:07.957 Found net devices under 0000:4b:00.0: cvl_0_0 
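For reference, the NIC discovery traced above reduces to one sysfs glob per PCI function; a minimal sketch in bash, assuming the E810 bus address reported in this log (0000:4b:00.0 comes from the output above, the rest is standard sysfs layout):

pci=0000:4b:00.0
# Each PCI network function publishes its kernel interface name(s) under
# /sys/bus/pci/devices/<addr>/net/; the directory is absent when the port
# is bound to a non-network driver such as vfio-pci.
for dev in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$dev" ] || continue   # glob did not match: no netdev for this port
    echo "Found net device under $pci: ${dev##*/}"
done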
00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:07.957 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:07.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:07.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.589 ms 00:29:07.957 00:29:07.957 --- 10.0.0.2 ping statistics --- 00:29:07.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.957 rtt min/avg/max/mdev = 0.589/0.589/0.589/0.000 ms 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:07.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:07.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:29:07.957 00:29:07.957 --- 10.0.0.1 ping statistics --- 00:29:07.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.957 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:07.957 ************************************ 00:29:07.957 START TEST nvmf_digest_clean 00:29:07.957 ************************************ 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:29:07.957 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:29:07.958 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:29:07.958 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:07.958 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:07.958 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:07.958 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=1245612 00:29:07.958 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 1245612 00:29:07.958 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:07.958 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 1245612 ']' 00:29:07.958 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:07.958 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:07.958 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:07.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:07.958 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:07.958 11:52:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:07.958 [2024-11-15 11:52:32.863682] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:29:07.958 [2024-11-15 11:52:32.863742] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:07.958 [2024-11-15 11:52:32.964199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.958 [2024-11-15 11:52:33.014803] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:07.958 [2024-11-15 11:52:33.014854] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:07.958 [2024-11-15 11:52:33.014863] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:07.958 [2024-11-15 11:52:33.014870] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:07.958 [2024-11-15 11:52:33.014877] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
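The nvmfappstart/waitforlisten pair traced above can be approximated by hand; a rough sketch, assuming SPDK points at the repo root used in this job and assuming spdk_get_version as the readiness probe (the harness's own waitforlisten may poll a different RPC method):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Launch the target inside the test namespace, held at --wait-for-rpc so no
# subsystem initializes before the test gets a chance to configure anything.
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!
# Poll the default RPC socket (/var/tmp/spdk.sock) until the app answers.
until "$SPDK/scripts/rpc.py" -t 1 spdk_get_version >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || exit 1   # target died during startup
    sleep 0.5
done
# Release the hold; subsystem init runs now.
"$SPDK/scripts/rpc.py" framework_start_init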
00:29:07.958 [2024-11-15 11:52:33.015627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.220 11:52:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:08.220 11:52:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:29:08.220 11:52:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:08.220 11:52:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:08.220 11:52:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:08.482 11:52:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:08.482 11:52:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:29:08.482 11:52:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:29:08.482 11:52:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:29:08.482 11:52:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.482 11:52:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:08.482 null0 00:29:08.482 [2024-11-15 11:52:33.823611] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:08.482 [2024-11-15 11:52:33.847913] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:08.482 11:52:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.482 11:52:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:29:08.482 11:52:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:08.482 11:52:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:08.482 11:52:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:08.482 11:52:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:08.482 11:52:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:08.482 11:52:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:08.482 11:52:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1245851 00:29:08.482 11:52:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1245851 /var/tmp/bperf.sock 00:29:08.482 11:52:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 1245851 ']' 00:29:08.482 11:52:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:08.482 11:52:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:08.482 11:52:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:29:08.482 11:52:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:08.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:08.482 11:52:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:08.482 11:52:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:08.482 [2024-11-15 11:52:33.918180] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:29:08.482 [2024-11-15 11:52:33.918257] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1245851 ] 00:29:08.743 [2024-11-15 11:52:34.002673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:08.743 [2024-11-15 11:52:34.062903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:09.316 11:52:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:09.316 11:52:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:29:09.316 11:52:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:09.316 11:52:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:09.316 11:52:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:09.576 11:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:09.576 11:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:10.149 nvme0n1 00:29:10.149 11:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:10.149 11:52:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:10.149 Running I/O for 2 seconds... 
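Condensed, the bperf sequence just traced is four commands; a sketch using the exact paths and arguments from this log (the intermediate waitforlisten step is elided, so in practice the first rpc.py call must retry until the socket exists):

S=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# 1. Start bdevperf idle: -z keeps it alive waiting for an RPC trigger,
#    --wait-for-rpc defers framework init, -r selects a private RPC socket.
"$S/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
    -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
# 2. Finish framework init over that socket.
"$S/scripts/rpc.py" -s /var/tmp/bperf.sock framework_start_init
# 3. Attach the target over NVMe/TCP with data digest (--ddgst) enabled.
"$S/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# 4. Kick off the timed run; the results table below is this call's output.
"$S/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests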
00:29:12.479 19506.00 IOPS, 76.20 MiB/s [2024-11-15T10:52:37.977Z] 22646.50 IOPS, 88.46 MiB/s 00:29:12.479 Latency(us) 00:29:12.479 [2024-11-15T10:52:37.977Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:12.479 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:12.479 nvme0n1 : 2.00 22660.02 88.52 0.00 0.00 5642.22 2293.76 18240.85 00:29:12.479 [2024-11-15T10:52:37.977Z] =================================================================================================================== 00:29:12.479 [2024-11-15T10:52:37.977Z] Total : 22660.02 88.52 0.00 0.00 5642.22 2293.76 18240.85 00:29:12.479 { 00:29:12.479 "results": [ 00:29:12.479 { 00:29:12.479 "job": "nvme0n1", 00:29:12.479 "core_mask": "0x2", 00:29:12.479 "workload": "randread", 00:29:12.479 "status": "finished", 00:29:12.479 "queue_depth": 128, 00:29:12.479 "io_size": 4096, 00:29:12.479 "runtime": 2.004897, 00:29:12.479 "iops": 22660.016948501594, 00:29:12.479 "mibps": 88.51569120508435, 00:29:12.479 "io_failed": 0, 00:29:12.479 "io_timeout": 0, 00:29:12.479 "avg_latency_us": 5642.224597888373, 00:29:12.479 "min_latency_us": 2293.76, 00:29:12.479 "max_latency_us": 18240.853333333333 00:29:12.479 } 00:29:12.479 ], 00:29:12.479 "core_count": 1 00:29:12.479 } 00:29:12.479 11:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:12.479 11:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:12.479 11:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:12.479 11:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:12.479 | select(.opcode=="crc32c") 00:29:12.479 | "\(.module_name) \(.executed)"' 00:29:12.479 11:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:12.479 11:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:12.479 11:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:12.479 11:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:12.479 11:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:12.479 11:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1245851 00:29:12.479 11:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 1245851 ']' 00:29:12.479 11:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 1245851 00:29:12.479 11:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:29:12.479 11:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:12.479 11:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1245851 00:29:12.479 11:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:12.479 11:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 
= sudo ']' 00:29:12.479 11:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1245851' 00:29:12.479 killing process with pid 1245851 00:29:12.479 11:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 1245851 00:29:12.479 Received shutdown signal, test time was about 2.000000 seconds 00:29:12.479 00:29:12.479 Latency(us) 00:29:12.479 [2024-11-15T10:52:37.977Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:12.479 [2024-11-15T10:52:37.977Z] =================================================================================================================== 00:29:12.479 [2024-11-15T10:52:37.977Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:12.479 11:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 1245851 00:29:12.479 11:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:29:12.479 11:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:12.479 11:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:12.479 11:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:12.479 11:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:12.479 11:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:12.479 11:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:12.479 11:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1246645 00:29:12.479 11:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1246645 /var/tmp/bperf.sock 00:29:12.479 11:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 1246645 ']' 00:29:12.479 11:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:12.479 11:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:12.479 11:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:12.479 11:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:12.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:12.480 11:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:12.480 11:52:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:12.741 [2024-11-15 11:52:37.982019] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:29:12.741 [2024-11-15 11:52:37.982075] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1246645 ] 00:29:12.741 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:12.741 Zero copy mechanism will not be used. 00:29:12.741 [2024-11-15 11:52:38.066041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.741 [2024-11-15 11:52:38.094024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:13.313 11:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:13.313 11:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:29:13.313 11:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:13.313 11:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:13.313 11:52:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:13.574 11:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:13.574 11:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:14.145 nvme0n1 00:29:14.145 11:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:14.145 11:52:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:14.145 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:14.145 Zero copy mechanism will not be used. 00:29:14.145 Running I/O for 2 seconds... 
00:29:16.033 3496.00 IOPS, 437.00 MiB/s [2024-11-15T10:52:41.531Z] 3327.00 IOPS, 415.88 MiB/s 00:29:16.033 Latency(us) 00:29:16.033 [2024-11-15T10:52:41.531Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:16.033 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:16.033 nvme0n1 : 2.01 3326.48 415.81 0.00 0.00 4805.40 928.43 14854.83 00:29:16.033 [2024-11-15T10:52:41.531Z] =================================================================================================================== 00:29:16.033 [2024-11-15T10:52:41.531Z] Total : 3326.48 415.81 0.00 0.00 4805.40 928.43 14854.83 00:29:16.033 { 00:29:16.033 "results": [ 00:29:16.033 { 00:29:16.033 "job": "nvme0n1", 00:29:16.033 "core_mask": "0x2", 00:29:16.033 "workload": "randread", 00:29:16.033 "status": "finished", 00:29:16.033 "queue_depth": 16, 00:29:16.033 "io_size": 131072, 00:29:16.033 "runtime": 2.005124, 00:29:16.033 "iops": 3326.477564479803, 00:29:16.033 "mibps": 415.80969555997535, 00:29:16.033 "io_failed": 0, 00:29:16.033 "io_timeout": 0, 00:29:16.033 "avg_latency_us": 4805.401203398301, 00:29:16.033 "min_latency_us": 928.4266666666666, 00:29:16.033 "max_latency_us": 14854.826666666666 00:29:16.033 } 00:29:16.033 ], 00:29:16.033 "core_count": 1 00:29:16.033 } 00:29:16.294 11:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:16.294 11:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:16.294 11:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:16.294 11:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:16.294 | select(.opcode=="crc32c") 00:29:16.294 | "\(.module_name) \(.executed)"' 00:29:16.294 11:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:16.294 11:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:16.294 11:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:16.294 11:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:16.294 11:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:16.294 11:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1246645 00:29:16.294 11:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 1246645 ']' 00:29:16.294 11:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 1246645 00:29:16.294 11:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:29:16.294 11:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:16.294 11:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1246645 00:29:16.554 11:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:16.554 11:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:29:16.554 11:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1246645' 00:29:16.554 killing process with pid 1246645 00:29:16.554 11:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 1246645 00:29:16.554 Received shutdown signal, test time was about 2.000000 seconds 00:29:16.554 00:29:16.554 Latency(us) 00:29:16.554 [2024-11-15T10:52:42.052Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:16.554 [2024-11-15T10:52:42.052Z] =================================================================================================================== 00:29:16.554 [2024-11-15T10:52:42.052Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:16.554 11:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 1246645 00:29:16.554 11:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:29:16.554 11:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:16.554 11:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:16.554 11:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:16.554 11:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:16.554 11:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:16.554 11:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:16.554 11:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1247332 00:29:16.554 11:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1247332 /var/tmp/bperf.sock 00:29:16.554 11:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 1247332 ']' 00:29:16.554 11:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:16.554 11:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:16.554 11:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:16.554 11:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:16.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:16.554 11:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:16.554 11:52:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:16.554 [2024-11-15 11:52:41.954949] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
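Each bperf instance is launched the way the command above shows: --wait-for-rpc parks bdevperf after EAL setup so the accel layer can still be reconfigured over RPC, and the harness then blocks until the UNIX socket answers. A minimal sketch of that launch-and-wait step, reusing the flags from this run; the polling probe stands in for the harness's waitforlisten helper and is illustrative only:

  # Start bdevperf paused before framework init (flags copied from the run above).
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc &

  # Poll until the RPC server accepts requests; rpc_get_methods is a stock
  # SPDK RPC that succeeds as soon as the listener is up.
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done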
00:29:16.554 [2024-11-15 11:52:41.955002] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1247332 ] 00:29:16.554 [2024-11-15 11:52:42.037495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.815 [2024-11-15 11:52:42.066750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:17.386 11:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:17.386 11:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:29:17.386 11:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:17.386 11:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:17.386 11:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:17.647 11:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:17.647 11:52:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:17.907 nvme0n1 00:29:17.907 11:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:17.907 11:52:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:17.907 Running I/O for 2 seconds... 
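Once the socket is live, every clean run issues the same three RPCs logged above. --ddgst enables the NVMe/TCP data digest, so each data PDU carries a CRC32C trailer computed and verified through the accel framework, which is exactly the work the test counts afterwards. Condensed from the entries above:

  rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  # Finish framework init (leaves the --wait-for-rpc pause).
  $rpc framework_start_init
  # Attach the target with data digest on; parameters are the ones logged above.
  $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Kick off the timed workload configured on the bdevperf command line.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests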
00:29:20.231 29573.00 IOPS, 115.52 MiB/s [2024-11-15T10:52:45.729Z] 29715.00 IOPS, 116.07 MiB/s 00:29:20.231 Latency(us) 00:29:20.231 [2024-11-15T10:52:45.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:20.231 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:20.231 nvme0n1 : 2.00 29729.22 116.13 0.00 0.00 4300.62 2280.11 13107.20 00:29:20.231 [2024-11-15T10:52:45.729Z] =================================================================================================================== 00:29:20.231 [2024-11-15T10:52:45.729Z] Total : 29729.22 116.13 0.00 0.00 4300.62 2280.11 13107.20 00:29:20.231 { 00:29:20.231 "results": [ 00:29:20.231 { 00:29:20.231 "job": "nvme0n1", 00:29:20.231 "core_mask": "0x2", 00:29:20.231 "workload": "randwrite", 00:29:20.231 "status": "finished", 00:29:20.231 "queue_depth": 128, 00:29:20.231 "io_size": 4096, 00:29:20.231 "runtime": 2.003349, 00:29:20.231 "iops": 29729.218423749433, 00:29:20.231 "mibps": 116.12975946777122, 00:29:20.231 "io_failed": 0, 00:29:20.231 "io_timeout": 0, 00:29:20.231 "avg_latency_us": 4300.6171776531555, 00:29:20.231 "min_latency_us": 2280.1066666666666, 00:29:20.231 "max_latency_us": 13107.2 00:29:20.231 } 00:29:20.231 ], 00:29:20.231 "core_count": 1 00:29:20.231 } 00:29:20.231 11:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:20.231 11:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:20.231 11:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:20.231 11:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:20.231 | select(.opcode=="crc32c") 00:29:20.231 | "\(.module_name) \(.executed)"' 00:29:20.231 11:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:20.231 11:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:20.231 11:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:20.231 11:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:20.231 11:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:20.231 11:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1247332 00:29:20.231 11:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 1247332 ']' 00:29:20.231 11:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 1247332 00:29:20.231 11:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:29:20.231 11:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:20.231 11:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1247332 00:29:20.231 11:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:20.231 11:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:29:20.231 11:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1247332' 00:29:20.231 killing process with pid 1247332 00:29:20.231 11:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 1247332 00:29:20.231 Received shutdown signal, test time was about 2.000000 seconds 00:29:20.231 00:29:20.231 Latency(us) 00:29:20.231 [2024-11-15T10:52:45.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:20.231 [2024-11-15T10:52:45.729Z] =================================================================================================================== 00:29:20.231 [2024-11-15T10:52:45.729Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:20.231 11:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 1247332 00:29:20.492 11:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:29:20.492 11:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:20.492 11:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:20.492 11:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:20.492 11:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:20.492 11:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:20.492 11:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:20.492 11:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1248022 00:29:20.492 11:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1248022 /var/tmp/bperf.sock 00:29:20.492 11:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 1248022 ']' 00:29:20.492 11:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:20.492 11:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:20.492 11:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:20.492 11:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:20.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:20.492 11:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:20.492 11:52:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:20.492 [2024-11-15 11:52:45.802751] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
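The read -r acc_module acc_executed step above is the actual pass/fail criterion for each clean run: crc32c must have executed at least once, and in the module the harness expects (software here, since scan_dsa=false). A standalone rendering of that check, using the same jq filter the log shows:

  read -r acc_module acc_executed < <(
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
  # Fail unless digests were computed, and by the expected accel module.
  (( acc_executed > 0 )) && [[ $acc_module == software ]] || exit 1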
00:29:20.492 [2024-11-15 11:52:45.802807] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1248022 ] 00:29:20.492 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:20.492 Zero copy mechanism will not be used. 00:29:20.492 [2024-11-15 11:52:45.887289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.492 [2024-11-15 11:52:45.916743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:21.431 11:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:21.431 11:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:29:21.431 11:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:21.431 11:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:21.431 11:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:21.431 11:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:21.431 11:52:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:21.691 nvme0n1 00:29:21.691 11:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:21.691 11:52:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:21.691 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:21.691 Zero copy mechanism will not be used. 00:29:21.691 Running I/O for 2 seconds... 
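The zero-copy notice repeated above is expected rather than a failure: 131072-byte I/Os exceed the posix sock module's 65536-byte zero-copy threshold, so payloads are copied into the socket instead of being sent with MSG_ZEROCOPY. A hedged sketch of raising that threshold while bdevperf is still paused; the option name follows the posix sock module's RPC and is assumed to match this SPDK revision:

  # Must run before any socket exists, i.e. while still in --wait-for-rpc.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/bperf.sock sock_impl_set_options -i posix \
      --zerocopy-threshold 131072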
00:29:24.013 5808.00 IOPS, 726.00 MiB/s [2024-11-15T10:52:49.511Z] 4552.00 IOPS, 569.00 MiB/s 00:29:24.013 Latency(us) 00:29:24.013 [2024-11-15T10:52:49.511Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.013 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:24.013 nvme0n1 : 2.01 4547.54 568.44 0.00 0.00 3512.08 1413.12 14527.15 00:29:24.013 [2024-11-15T10:52:49.511Z] =================================================================================================================== 00:29:24.013 [2024-11-15T10:52:49.511Z] Total : 4547.54 568.44 0.00 0.00 3512.08 1413.12 14527.15 00:29:24.013 { 00:29:24.013 "results": [ 00:29:24.013 { 00:29:24.013 "job": "nvme0n1", 00:29:24.013 "core_mask": "0x2", 00:29:24.013 "workload": "randwrite", 00:29:24.013 "status": "finished", 00:29:24.013 "queue_depth": 16, 00:29:24.013 "io_size": 131072, 00:29:24.013 "runtime": 2.006359, 00:29:24.013 "iops": 4547.541093094506, 00:29:24.013 "mibps": 568.4426366368133, 00:29:24.013 "io_failed": 0, 00:29:24.013 "io_timeout": 0, 00:29:24.013 "avg_latency_us": 3512.0790764284666, 00:29:24.013 "min_latency_us": 1413.12, 00:29:24.013 "max_latency_us": 14527.146666666667 00:29:24.013 } 00:29:24.013 ], 00:29:24.013 "core_count": 1 00:29:24.013 } 00:29:24.013 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:24.013 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:24.013 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:24.013 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:24.013 | select(.opcode=="crc32c") 00:29:24.013 | "\(.module_name) \(.executed)"' 00:29:24.013 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:24.013 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:24.013 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:24.013 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:24.013 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:24.013 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1248022 00:29:24.013 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 1248022 ']' 00:29:24.013 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 1248022 00:29:24.013 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:29:24.013 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:24.013 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1248022 00:29:24.013 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:24.013 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:29:24.013 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1248022' 00:29:24.013 killing process with pid 1248022 00:29:24.013 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 1248022 00:29:24.013 Received shutdown signal, test time was about 2.000000 seconds 00:29:24.013 00:29:24.013 Latency(us) 00:29:24.013 [2024-11-15T10:52:49.511Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.013 [2024-11-15T10:52:49.511Z] =================================================================================================================== 00:29:24.013 [2024-11-15T10:52:49.511Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:24.013 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 1248022 00:29:24.273 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1245612 00:29:24.273 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 1245612 ']' 00:29:24.273 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 1245612 00:29:24.273 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:29:24.273 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:24.273 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1245612 00:29:24.273 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:24.273 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:24.273 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1245612' 00:29:24.273 killing process with pid 1245612 00:29:24.273 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 1245612 00:29:24.273 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 1245612 00:29:24.273 00:29:24.273 real 0m16.925s 00:29:24.273 user 0m33.619s 00:29:24.273 sys 0m3.682s 00:29:24.273 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:24.273 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:24.273 ************************************ 00:29:24.273 END TEST nvmf_digest_clean 00:29:24.273 ************************************ 00:29:24.273 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:29:24.274 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:24.274 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:24.274 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:24.534 ************************************ 00:29:24.534 START TEST nvmf_digest_error 00:29:24.534 ************************************ 00:29:24.534 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # 
run_digest_error 00:29:24.534 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:29:24.534 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:24.534 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:24.534 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:24.534 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=1248954 00:29:24.534 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 1248954 00:29:24.534 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:24.534 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 1248954 ']' 00:29:24.534 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:24.534 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:24.534 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:24.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:24.534 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:24.534 11:52:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:24.534 [2024-11-15 11:52:49.866710] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:29:24.534 [2024-11-15 11:52:49.866759] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:24.534 [2024-11-15 11:52:49.957104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.534 [2024-11-15 11:52:49.987561] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:24.534 [2024-11-15 11:52:49.987591] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:24.534 [2024-11-15 11:52:49.987597] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:24.534 [2024-11-15 11:52:49.987602] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:24.534 [2024-11-15 11:52:49.987606] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
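For the error tests the target is started with every tracepoint group enabled (-e 0xFFFF), and the notices above spell out how to look at them. The snapshot command below is quoted directly from that output; it attaches to the target's trace shared memory (app name nvmf, instance id 0) while the target is running:

  # Snapshot live tracepoints, or copy /dev/shm/nvmf_trace.0 for offline analysis.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0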
00:29:24.534 [2024-11-15 11:52:49.988071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:25.474 11:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:25.474 11:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:29:25.474 11:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:25.474 11:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:25.474 11:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:25.474 11:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:25.474 11:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:25.474 11:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.474 11:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:25.474 [2024-11-15 11:52:50.698016] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:25.474 11:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.474 11:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:29:25.474 11:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:29:25.474 11:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.474 11:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:25.474 null0 00:29:25.474 [2024-11-15 11:52:50.776939] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:25.474 [2024-11-15 11:52:50.801135] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:25.474 11:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.474 11:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:29:25.474 11:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:25.474 11:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:25.474 11:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:25.474 11:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:25.474 11:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1249081 00:29:25.474 11:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1249081 /var/tmp/bperf.sock 00:29:25.474 11:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 1249081 ']' 00:29:25.474 11:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
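The crc32c-to-error-module assignment, the null0 namespace, and the 10.0.0.2:4420 listener above come out of the shared target setup. A hedged reconstruction with stock nvmf RPCs: the null bdev geometry is assumed (the log only names the bdev), while the opcode routing, NQN, address, and port are taken from the entries above.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # Route all crc32c operations through the accel "error" module, then init.
  $rpc accel_assign_opc -o crc32c -m error
  $rpc framework_start_init
  # Export a null bdev over NVMe/TCP; 100 MiB size / 512-byte blocks are assumed.
  $rpc nvmf_create_transport -t tcp
  $rpc bdev_null_create null0 100 512
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420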
00:29:25.474 11:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:25.474 11:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:25.474 11:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:25.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:25.474 11:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:25.474 11:52:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:25.474 [2024-11-15 11:52:50.856673] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:29:25.474 [2024-11-15 11:52:50.856718] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1249081 ] 00:29:25.474 [2024-11-15 11:52:50.940313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.474 [2024-11-15 11:52:50.969924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:26.416 11:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:26.416 11:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:29:26.416 11:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:26.416 11:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:26.417 11:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:26.417 11:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.417 11:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:26.417 11:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.417 11:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:26.417 11:52:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:26.677 nvme0n1 00:29:26.677 11:52:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:26.677 11:52:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.677 11:52:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
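Two settings above define how the error test behaves: --bdev-retry-count -1 makes the bdev layer retry failed I/Os indefinitely, so every corrupted digest surfaces as a retried transient transport error (the flood that follows) rather than an I/O failure, while --nvme-error-stat keeps per-status counters. Injection starts disabled while the controller attaches and is flipped to corrupt just before the workload, as the next entries show:

  rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  # Baseline: leave digests intact while attaching the controller.
  $rpc accel_error_inject_error -o crc32c -t disable
  # Enable corruption for crc32c operations (-o/-t/-i copied verbatim from the
  # harness call logged below); each bad digest makes the initiator's receive
  # path flag a data digest error and retry the read.
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 256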
00:29:26.677 11:52:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.677 11:52:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:26.677 11:52:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:26.677 Running I/O for 2 seconds... 00:29:26.938 [2024-11-15 11:52:52.186745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:26.938 [2024-11-15 11:52:52.186773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:3833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.938 [2024-11-15 11:52:52.186781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.938 [2024-11-15 11:52:52.194765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:26.938 [2024-11-15 11:52:52.194786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.938 [2024-11-15 11:52:52.194794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.938 [2024-11-15 11:52:52.205956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:26.938 [2024-11-15 11:52:52.205974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.938 [2024-11-15 11:52:52.205981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.938 [2024-11-15 11:52:52.218098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:26.938 [2024-11-15 11:52:52.218117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.938 [2024-11-15 11:52:52.218124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.938 [2024-11-15 11:52:52.228920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:26.938 [2024-11-15 11:52:52.228937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.938 [2024-11-15 11:52:52.228944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.938 [2024-11-15 11:52:52.238015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:26.938 [2024-11-15 11:52:52.238033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:5298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.938 [2024-11-15 11:52:52.238040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.938 [2024-11-15 11:52:52.247142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:26.938 [2024-11-15 11:52:52.247159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.938 [2024-11-15 11:52:52.247165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.938 [2024-11-15 11:52:52.258231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:26.938 [2024-11-15 11:52:52.258249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.938 [2024-11-15 11:52:52.258256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.938 [2024-11-15 11:52:52.267985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:26.938 [2024-11-15 11:52:52.268002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.938 [2024-11-15 11:52:52.268009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.938 [2024-11-15 11:52:52.277009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:26.938 [2024-11-15 11:52:52.277026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.938 [2024-11-15 11:52:52.277033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.939 [2024-11-15 11:52:52.287407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:26.939 [2024-11-15 11:52:52.287425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:2840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.939 [2024-11-15 11:52:52.287432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.939 [2024-11-15 11:52:52.299106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:26.939 [2024-11-15 11:52:52.299123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.939 [2024-11-15 11:52:52.299130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.939 [2024-11-15 11:52:52.310885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:26.939 [2024-11-15 11:52:52.310902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.939 [2024-11-15 11:52:52.310909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.939 [2024-11-15 11:52:52.318709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:26.939 [2024-11-15 11:52:52.318727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.939 [2024-11-15 11:52:52.318737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.939 [2024-11-15 11:52:52.330075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:26.939 [2024-11-15 11:52:52.330093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.939 [2024-11-15 11:52:52.330099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.939 [2024-11-15 11:52:52.340794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:26.939 [2024-11-15 11:52:52.340812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.939 [2024-11-15 11:52:52.340818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.939 [2024-11-15 11:52:52.351097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:26.939 [2024-11-15 11:52:52.351115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.939 [2024-11-15 11:52:52.351122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.939 [2024-11-15 11:52:52.359429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:26.939 [2024-11-15 11:52:52.359447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.939 [2024-11-15 11:52:52.359453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.939 [2024-11-15 11:52:52.369405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:26.939 [2024-11-15 11:52:52.369423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.939 [2024-11-15 11:52:52.369429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.939 [2024-11-15 11:52:52.377317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:26.939 [2024-11-15 11:52:52.377334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.939 [2024-11-15 11:52:52.377340] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.939 [2024-11-15 11:52:52.387381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:26.939 [2024-11-15 11:52:52.387402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.939 [2024-11-15 11:52:52.387409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.939 [2024-11-15 11:52:52.395297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:26.939 [2024-11-15 11:52:52.395314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.939 [2024-11-15 11:52:52.395320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.939 [2024-11-15 11:52:52.405031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:26.939 [2024-11-15 11:52:52.405051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.939 [2024-11-15 11:52:52.405058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.939 [2024-11-15 11:52:52.414945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:26.939 [2024-11-15 11:52:52.414962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.939 [2024-11-15 11:52:52.414969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.939 [2024-11-15 11:52:52.424193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:26.939 [2024-11-15 11:52:52.424210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.939 [2024-11-15 11:52:52.424217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:26.939 [2024-11-15 11:52:52.432524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:26.939 [2024-11-15 11:52:52.432542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:26.939 [2024-11-15 11:52:52.432549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.201 [2024-11-15 11:52:52.441812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:27.201 [2024-11-15 11:52:52.441829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.201 
[2024-11-15 11:52:52.441836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.201 [2024-11-15 11:52:52.450632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:27.201 [2024-11-15 11:52:52.450649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.201 [2024-11-15 11:52:52.450656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.201 [2024-11-15 11:52:52.459916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:27.201 [2024-11-15 11:52:52.459934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.201 [2024-11-15 11:52:52.459940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.201 [2024-11-15 11:52:52.468458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:27.201 [2024-11-15 11:52:52.468475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.201 [2024-11-15 11:52:52.468481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.201 [2024-11-15 11:52:52.479117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:27.201 [2024-11-15 11:52:52.479134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:9977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.201 [2024-11-15 11:52:52.479141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.201 [2024-11-15 11:52:52.487271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:27.201 [2024-11-15 11:52:52.487288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:25368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.201 [2024-11-15 11:52:52.487295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.201 [2024-11-15 11:52:52.497973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:27.201 [2024-11-15 11:52:52.497991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.201 [2024-11-15 11:52:52.497997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.201 [2024-11-15 11:52:52.509725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:27.201 [2024-11-15 11:52:52.509743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5152 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.201 [2024-11-15 11:52:52.509749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.201 [2024-11-15 11:52:52.519127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:27.201 [2024-11-15 11:52:52.519145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.201 [2024-11-15 11:52:52.519152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.201 [2024-11-15 11:52:52.528321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:27.201 [2024-11-15 11:52:52.528339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.201 [2024-11-15 11:52:52.528346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.201 [2024-11-15 11:52:52.536921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:27.202 [2024-11-15 11:52:52.536939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.202 [2024-11-15 11:52:52.536946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.202 [2024-11-15 11:52:52.546037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:27.202 [2024-11-15 11:52:52.546055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.202 [2024-11-15 11:52:52.546061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.202 [2024-11-15 11:52:52.554061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:27.202 [2024-11-15 11:52:52.554079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.202 [2024-11-15 11:52:52.554086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.202 [2024-11-15 11:52:52.563967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:27.202 [2024-11-15 11:52:52.563985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.202 [2024-11-15 11:52:52.563995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.202 [2024-11-15 11:52:52.572652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:27.202 [2024-11-15 11:52:52.572670] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:27.202 [2024-11-15 11:52:52.572677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:27.202 [2024-11-15 11:52:52.581324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040)
00:29:27.202 [2024-11-15 11:52:52.581342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:27.202 [2024-11-15 11:52:52.581348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-line sequence — nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done data digest error on tqpair=(0x1881040), the failing READ command (cid and lba vary per command), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion — repeats continuously from 11:52:52.59 through 11:52:53.16; repeated records omitted ...]
00:29:27.727 26563.00 IOPS, 103.76 MiB/s [2024-11-15T10:52:53.225Z]
[... the same repeating data digest error pattern continues from 11:52:53.18 through 11:52:54.00; repeated records omitted ...]
00:29:28.773 [2024-11-15 11:52:54.011078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040)
00:29:28.773 [2024-11-15 11:52:54.011095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:28.773 [2024-11-15 11:52:54.011102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0
m:0 dnr:0 00:29:28.773 [2024-11-15 11:52:54.019496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:28.773 [2024-11-15 11:52:54.019513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.773 [2024-11-15 11:52:54.019519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.773 [2024-11-15 11:52:54.029189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:28.773 [2024-11-15 11:52:54.029206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.773 [2024-11-15 11:52:54.029213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.773 [2024-11-15 11:52:54.038596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:28.773 [2024-11-15 11:52:54.038613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.773 [2024-11-15 11:52:54.038622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.773 [2024-11-15 11:52:54.046553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:28.773 [2024-11-15 11:52:54.046574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.773 [2024-11-15 11:52:54.046581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.773 [2024-11-15 11:52:54.056643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:28.773 [2024-11-15 11:52:54.056661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.773 [2024-11-15 11:52:54.056667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.773 [2024-11-15 11:52:54.065874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:28.773 [2024-11-15 11:52:54.065893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.773 [2024-11-15 11:52:54.065899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.773 [2024-11-15 11:52:54.077642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:28.773 [2024-11-15 11:52:54.077666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.773 [2024-11-15 11:52:54.077673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.773 [2024-11-15 11:52:54.087953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:28.773 [2024-11-15 11:52:54.087971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:24523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.773 [2024-11-15 11:52:54.087977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.773 [2024-11-15 11:52:54.097787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:28.773 [2024-11-15 11:52:54.097805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:17973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.773 [2024-11-15 11:52:54.097812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.773 [2024-11-15 11:52:54.108717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:28.773 [2024-11-15 11:52:54.108739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.773 [2024-11-15 11:52:54.108750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.773 [2024-11-15 11:52:54.116821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:28.773 [2024-11-15 11:52:54.116838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.773 [2024-11-15 11:52:54.116845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.773 [2024-11-15 11:52:54.129080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:28.773 [2024-11-15 11:52:54.129098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.773 [2024-11-15 11:52:54.129104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.773 [2024-11-15 11:52:54.139593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:28.773 [2024-11-15 11:52:54.139610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:7653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.773 [2024-11-15 11:52:54.139616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.773 [2024-11-15 11:52:54.147502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:28.773 [2024-11-15 11:52:54.147519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.773 [2024-11-15 11:52:54.147526] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.773 [2024-11-15 11:52:54.159530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:28.773 [2024-11-15 11:52:54.159547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.773 [2024-11-15 11:52:54.159554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.773 [2024-11-15 11:52:54.170632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1881040) 00:29:28.773 [2024-11-15 11:52:54.170650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.773 [2024-11-15 11:52:54.170657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.773 25637.00 IOPS, 100.14 MiB/s 00:29:28.773 Latency(us) 00:29:28.773 [2024-11-15T10:52:54.271Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:28.773 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:28.773 nvme0n1 : 2.00 25662.91 100.25 0.00 0.00 4982.80 2239.15 17367.04 00:29:28.773 [2024-11-15T10:52:54.271Z] =================================================================================================================== 00:29:28.773 [2024-11-15T10:52:54.271Z] Total : 25662.91 100.25 0.00 0.00 4982.80 2239.15 17367.04 00:29:28.773 { 00:29:28.773 "results": [ 00:29:28.773 { 00:29:28.773 "job": "nvme0n1", 00:29:28.773 "core_mask": "0x2", 00:29:28.773 "workload": "randread", 00:29:28.773 "status": "finished", 00:29:28.773 "queue_depth": 128, 00:29:28.773 "io_size": 4096, 00:29:28.773 "runtime": 2.003865, 00:29:28.773 "iops": 25662.906433317614, 00:29:28.773 "mibps": 100.24572825514693, 00:29:28.773 "io_failed": 0, 00:29:28.773 "io_timeout": 0, 00:29:28.773 "avg_latency_us": 4982.804111618862, 00:29:28.773 "min_latency_us": 2239.1466666666665, 00:29:28.773 "max_latency_us": 17367.04 00:29:28.773 } 00:29:28.773 ], 00:29:28.773 "core_count": 1 00:29:28.773 } 00:29:28.773 11:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:28.773 11:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:28.774 11:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:28.774 | .driver_specific 00:29:28.774 | .nvme_error 00:29:28.774 | .status_code 00:29:28.774 | .command_transient_transport_error' 00:29:28.774 11:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:29.033 11:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 201 > 0 )) 00:29:29.033 11:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1249081 00:29:29.033 11:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 1249081 ']' 00:29:29.033 11:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error 
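The check just traced, (( 201 > 0 )), is the whole verdict of this test: at least one injected digest error must have been observed and counted. Reduced to a standalone shell sketch (rpc.py path, socket, and jq filter taken from the trace above; this is an illustrative sketch of what digest.sh does here, not the script verbatim):

  get_transient_errcount() {
      # Ask the running bdevperf instance for per-bdev iostat over its RPC
      # socket, then pull the NVMe error counter for COMMAND TRANSIENT
      # TRANSPORT ERROR completions out of the driver-specific stats.
      local bdev=$1
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
          bdev_get_iostat -b "$bdev" |
          jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
  }

  (( $(get_transient_errcount nvme0n1) > 0 ))   # here: 201 > 0, so the run passes

The numbers above are internally consistent: 25662.91 IOPS at an I/O size of 4096 B is 25662.91 x 4096 / 2^20 = 100.25 MiB/s, exactly the reported throughput, and io_failed stays 0, consistent with the transient errors being retried rather than surfaced as failures.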
00:29:29.033 11:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1249081
00:29:29.033 11:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 1249081 ']'
00:29:29.033 11:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 1249081
00:29:29.033 11:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:29:29.033 11:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:29:29.033 11:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1249081
00:29:29.033 11:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:29:29.033 11:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:29:29.033 11:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1249081'
00:29:29.033 killing process with pid 1249081
00:29:29.033 11:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 1249081
00:29:29.033 Received shutdown signal, test time was about 2.000000 seconds
00:29:29.033
00:29:29.033 Latency(us)
00:29:29.033 [2024-11-15T10:52:54.531Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:29.033 [2024-11-15T10:52:54.531Z] ===================================================================================================================
00:29:29.033 [2024-11-15T10:52:54.531Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:29.033 11:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 1249081
00:29:29.293 11:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:29:29.293 11:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:29.293 11:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:29:29.293 11:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:29.293 11:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:29.293 11:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1249761
00:29:29.293 11:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1249761 /var/tmp/bperf.sock
00:29:29.293 11:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:29:29.293 11:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 1249761 ']'
00:29:29.293 11:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:29.293 11:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:29:29.293 11:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:29.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
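The bdevperf line traced above launches the second error-injection run: random 128 KiB reads at queue depth 16. An annotated restatement follows; the flag glosses are assumed from bdevperf's usage text and are not printed anywhere in this log:

  # -m 2                   core mask 0x2: one reactor, pinned to core 1
  # -r /var/tmp/bperf.sock RPC socket that bperf_rpc / bperf_py talk to below
  # -w randread            workload: random reads
  # -o 131072              I/O size in bytes (the bs=131072 set above)
  # -t 2                   run time in seconds
  # -q 16                  queue depth (the qd=16 set above)
  # -z                     start idle and wait for a perform_tests RPC before issuing I/O
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!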
00:29:29.293 11:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:29:29.293 11:52:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:29.293 [2024-11-15 11:52:54.592142] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
00:29:29.293 [2024-11-15 11:52:54.592197] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1249761 ]
00:29:29.293 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:29.293 Zero copy mechanism will not be used.
00:29:29.293 [2024-11-15 11:52:54.674330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:29.294 [2024-11-15 11:52:54.703756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:30.232 11:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:29:30.232 11:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:29:30.232 11:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:30.232 11:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:30.232 11:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:30.232 11:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:30.232 11:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:30.233 11:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:30.233 11:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:30.233 11:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:30.492 nvme0n1
00:29:30.492 11:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:30.492 11:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:30.492 11:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:30.492 11:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:30.492 11:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:30.492 11:52:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
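Stripped of the xtrace noise, the setup just traced is a five-step recipe: configure error accounting, disable injection while attaching, attach with data digest on, arm the corruption, then start the queued job. As plain commands (paths and arguments exactly as traced; the comments are an editor's summary, and the precise semantics of -i 32 are as documented by rpc.py accel_error_inject_error --help rather than shown in this log):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Keep per-status-code NVMe error stats and retry transiently failed I/O indefinitely.
  $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Make sure crc32c injection is off so the controller attaches cleanly.
  $rpc -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t disable
  # Attach the target with NVMe/TCP data digest enabled (--ddgst), so READ payloads are CRC-checked.
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Re-arm injection: corrupt crc32c results so received digests stop matching (-i 32 as traced).
  $rpc -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 32
  # Tell the idle bdevperf (-z) to start the timed workload.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests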
00:29:30.753 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:30.753 Zero copy mechanism will not be used.
00:29:30.753 Running I/O for 2 seconds...
00:29:30.753 [2024-11-15 11:52:56.046958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870)
00:29:30.753 [2024-11-15 11:52:56.046989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.753 [2024-11-15 11:52:56.046997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[2024-11-15 11:52:56.057550 through 11:52:56.741453: roughly 80 further injected data-digest-error completions, all on tqpair=(0xfca870) and all READ len:32 with status (00/22); only cid, lba, and sqhd (cycling 0002/0022/0042/0062) vary. The run continues past the end of this excerpt.]
BLOCK TRANSPORT 0x0 00:29:31.278 [2024-11-15 11:52:56.741478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.278 [2024-11-15 11:52:56.746129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:31.278 [2024-11-15 11:52:56.746148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.278 [2024-11-15 11:52:56.746154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.278 [2024-11-15 11:52:56.754867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:31.278 [2024-11-15 11:52:56.754885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.278 [2024-11-15 11:52:56.754892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.278 [2024-11-15 11:52:56.760989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:31.279 [2024-11-15 11:52:56.761008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.279 [2024-11-15 11:52:56.761014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.279 [2024-11-15 11:52:56.765502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:31.279 [2024-11-15 11:52:56.765521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.279 [2024-11-15 11:52:56.765527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.279 [2024-11-15 11:52:56.772317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:31.279 [2024-11-15 11:52:56.772339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.279 [2024-11-15 11:52:56.772345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.541 [2024-11-15 11:52:56.783403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:31.541 [2024-11-15 11:52:56.783423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.541 [2024-11-15 11:52:56.783429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.541 [2024-11-15 11:52:56.789296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:31.541 [2024-11-15 11:52:56.789314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.541 [2024-11-15 11:52:56.789320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.541 [2024-11-15 11:52:56.793924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:31.541 [2024-11-15 11:52:56.793942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.541 [2024-11-15 11:52:56.793948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.541 [2024-11-15 11:52:56.804896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:31.541 [2024-11-15 11:52:56.804915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.541 [2024-11-15 11:52:56.804921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.541 [2024-11-15 11:52:56.811859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:31.541 [2024-11-15 11:52:56.811877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.541 [2024-11-15 11:52:56.811883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.541 [2024-11-15 11:52:56.819772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:31.541 [2024-11-15 11:52:56.819790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.541 [2024-11-15 11:52:56.819797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.541 [2024-11-15 11:52:56.827529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:31.541 [2024-11-15 11:52:56.827548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.541 [2024-11-15 11:52:56.827554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.541 [2024-11-15 11:52:56.833013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:31.541 [2024-11-15 11:52:56.833031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.541 [2024-11-15 11:52:56.833038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.541 [2024-11-15 11:52:56.842678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:31.541 [2024-11-15 11:52:56.842695] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.541 [2024-11-15 11:52:56.842701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.541 [2024-11-15 11:52:56.852833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:31.541 [2024-11-15 11:52:56.852851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.541 [2024-11-15 11:52:56.852858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.541 [2024-11-15 11:52:56.859056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:31.541 [2024-11-15 11:52:56.859074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.541 [2024-11-15 11:52:56.859080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.541 [2024-11-15 11:52:56.867768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:31.541 [2024-11-15 11:52:56.867786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.541 [2024-11-15 11:52:56.867792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.541 [2024-11-15 11:52:56.873472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:31.541 [2024-11-15 11:52:56.873490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.541 [2024-11-15 11:52:56.873497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.541 [2024-11-15 11:52:56.877850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:31.541 [2024-11-15 11:52:56.877868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.541 [2024-11-15 11:52:56.877875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.541 [2024-11-15 11:52:56.884084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:31.541 [2024-11-15 11:52:56.884102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.541 [2024-11-15 11:52:56.884108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.541 [2024-11-15 11:52:56.888542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:31.541 [2024-11-15 11:52:56.888560] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.541 [2024-11-15 11:52:56.888572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.541 [2024-11-15 11:52:56.895745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:31.541 [2024-11-15 11:52:56.895764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.541 [2024-11-15 11:52:56.895774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.541 [2024-11-15 11:52:56.907045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:31.541 [2024-11-15 11:52:56.907064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.541 [2024-11-15 11:52:56.907070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.541 [2024-11-15 11:52:56.918109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:31.541 [2024-11-15 11:52:56.918128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.541 [2024-11-15 11:52:56.918134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.541 [2024-11-15 11:52:56.927482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:31.541 [2024-11-15 11:52:56.927500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.541 [2024-11-15 11:52:56.927506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.541 [2024-11-15 11:52:56.938838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:31.541 [2024-11-15 11:52:56.938857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.541 [2024-11-15 11:52:56.938863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.541 [2024-11-15 11:52:56.948316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:31.541 [2024-11-15 11:52:56.948335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.541 [2024-11-15 11:52:56.948341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.541 [2024-11-15 11:52:56.960163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:31.541 
[2024-11-15 11:52:56.960182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.541 [2024-11-15 11:52:56.960188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.541 [2024-11-15 11:52:56.969354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:31.541 [2024-11-15 11:52:56.969372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.541 [2024-11-15 11:52:56.969378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.541 [2024-11-15 11:52:56.975480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:31.541 [2024-11-15 11:52:56.975498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.541 [2024-11-15 11:52:56.975504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.541 [2024-11-15 11:52:56.981919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:31.541 [2024-11-15 11:52:56.981940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.542 [2024-11-15 11:52:56.981947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:31.542 [2024-11-15 11:52:56.991505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:31.542 [2024-11-15 11:52:56.991524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.542 [2024-11-15 11:52:56.991530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:31.542 [2024-11-15 11:52:56.995991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:31.542 [2024-11-15 11:52:56.996010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.542 [2024-11-15 11:52:56.996016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:31.542 [2024-11-15 11:52:57.000658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:31.542 [2024-11-15 11:52:57.000676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.542 [2024-11-15 11:52:57.000682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:31.542 [2024-11-15 11:52:57.012141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
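(editor's note) The burst of failures above is SPDK's NVMe/TCP initiator rejecting READ data whose data digest does not match: nvme_tcp_accel_seq_recv_compute_crc32_done recomputes a CRC32C over the received data PDU payload and, when it disagrees with the digest carried by the PDU, fails the command with a transient transport error so the host may retry it. Below is a minimal, self-contained sketch of that style of check, using a standard bitwise CRC32C rather than SPDK's accelerated path; the payload contents and the injected bit-flip are hypothetical:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* CRC32C (Castagnoli), the digest algorithm NVMe/TCP uses for its
     * DDGST field: reflected polynomial 0x82F63B78, initial value
     * 0xFFFFFFFF, final XOR 0xFFFFFFFF. Bitwise for clarity, not speed. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++)
                crc = (crc >> 1) ^ (0x82F63B78u & -(crc & 1u));
        }
        return crc ^ 0xFFFFFFFFu;
    }

    int main(void)
    {
        /* Hypothetical 32-byte data PDU payload. */
        uint8_t payload[32];
        memset(payload, 0xA5, sizeof(payload));

        uint32_t ddgst = crc32c(payload, sizeof(payload)); /* sender side */
        payload[7] ^= 0x01;             /* simulate corruption in flight */
        uint32_t recomputed = crc32c(payload, sizeof(payload));

        if (recomputed != ddgst)
            fprintf(stderr, "data digest error: expected 0x%08x got 0x%08x\n",
                    ddgst, recomputed);
        return 0;
    }

Flipping any single payload bit after the digest is computed is enough to reproduce a mismatch of the kind logged above.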
00:29:31.804 3692.00 IOPS, 461.50 MiB/s [2024-11-15T10:52:57.047Z] [2024-11-15 11:52:57.047609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870)
00:29:31.804 [2024-11-15 11:52:57.047628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.804 [2024-11-15 11:52:57.047635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the digest-error/READ/completion triplets continue at the same cadence, still on tqpair=(0xfca870) qid:1, from 11:52:57.057 through 11:52:57.260 ...]
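(editor's note) The progress line above is internally consistent: 461.50 MiB/s divided by 3692.00 IOPS is exactly 0.125 MiB, i.e. 128 KiB per I/O, and if those are the same len:32 (32-block) READs being printed, that would imply a 4096-byte logical block. A quick sketch of that arithmetic; the throughput and IOPS figures come from the log, while the block-size inference is an assumption:

    #include <assert.h>
    #include <stdio.h>

    int main(void)
    {
        /* Figures taken from the progress line in this log. */
        const double iops = 3692.00;
        const double mib_per_s = 461.50;

        /* Implied transfer size per I/O. */
        double mib_per_io = mib_per_s / iops;           /* 0.125 MiB */
        double bytes_per_io = mib_per_io * 1024 * 1024; /* 131072 B = 128 KiB */

        /* Assumption: these are the same len:32 (32-block) READs printed
         * above, which would make each logical block 4096 bytes. */
        double block_size = bytes_per_io / 32.0;

        printf("%.3f MiB per I/O, %.0f B per I/O, %.0f B per block\n",
               mib_per_io, bytes_per_io, block_size);
        assert(bytes_per_io == 131072.0);
        return 0;
    }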
00:29:31.805 [2024-11-15 11:52:57.268656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870)
00:29:31.805 [2024-11-15 11:52:57.268674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:31.805 [2024-11-15 11:52:57.268681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the pattern continues unchanged through 11:52:57.665, with cid values ranging 0-13 and lba values up to 25504 ...]
00:29:32.328 [2024-11-15 11:52:57.669979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.328 [2024-11-15 11:52:57.669997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.328 [2024-11-15 11:52:57.670003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:32.328 [2024-11-15 11:52:57.681330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.328 [2024-11-15 11:52:57.681348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.328 [2024-11-15 11:52:57.681354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:32.328 [2024-11-15 11:52:57.692637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.328 [2024-11-15 11:52:57.692655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.328 [2024-11-15 11:52:57.692661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:32.328 [2024-11-15 11:52:57.704154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.328 [2024-11-15 11:52:57.704172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.328 [2024-11-15 11:52:57.704178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:32.328 [2024-11-15 11:52:57.714867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.328 [2024-11-15 11:52:57.714886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.328 [2024-11-15 11:52:57.714892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:32.328 [2024-11-15 11:52:57.725897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.328 [2024-11-15 11:52:57.725915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.328 [2024-11-15 11:52:57.725921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:32.328 [2024-11-15 11:52:57.738105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.328 [2024-11-15 11:52:57.738122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.328 [2024-11-15 11:52:57.738129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:32.328 [2024-11-15 11:52:57.748682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.328 [2024-11-15 11:52:57.748700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.328 [2024-11-15 11:52:57.748706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:32.328 [2024-11-15 11:52:57.757540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.328 [2024-11-15 11:52:57.757566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.328 [2024-11-15 11:52:57.757572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:32.328 [2024-11-15 11:52:57.767028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.328 [2024-11-15 11:52:57.767046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.328 [2024-11-15 11:52:57.767052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:32.328 [2024-11-15 11:52:57.775747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.328 [2024-11-15 11:52:57.775765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.328 [2024-11-15 11:52:57.775771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:32.328 [2024-11-15 11:52:57.786465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.328 [2024-11-15 11:52:57.786483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.328 [2024-11-15 11:52:57.786489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:32.328 [2024-11-15 11:52:57.799278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.328 [2024-11-15 11:52:57.799295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.328 [2024-11-15 11:52:57.799301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:32.328 [2024-11-15 11:52:57.810190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.328 [2024-11-15 11:52:57.810207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.328 [2024-11-15 11:52:57.810213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:32.328 [2024-11-15 11:52:57.822547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.328 [2024-11-15 11:52:57.822569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.328 [2024-11-15 11:52:57.822576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:32.590 [2024-11-15 11:52:57.830849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.590 [2024-11-15 11:52:57.830867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.590 [2024-11-15 11:52:57.830874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:32.590 [2024-11-15 11:52:57.833731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.590 [2024-11-15 11:52:57.833749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.590 [2024-11-15 11:52:57.833755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:32.590 [2024-11-15 11:52:57.843000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.590 [2024-11-15 11:52:57.843017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.590 [2024-11-15 11:52:57.843023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:32.590 [2024-11-15 11:52:57.847460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.590 [2024-11-15 11:52:57.847477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.590 [2024-11-15 11:52:57.847483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:32.590 [2024-11-15 11:52:57.851907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.590 [2024-11-15 11:52:57.851924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.590 [2024-11-15 11:52:57.851930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:32.590 [2024-11-15 11:52:57.857938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.590 [2024-11-15 11:52:57.857955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.590 [2024-11-15 11:52:57.857961] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:32.590 [2024-11-15 11:52:57.863880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.590 [2024-11-15 11:52:57.863897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.590 [2024-11-15 11:52:57.863904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:32.590 [2024-11-15 11:52:57.867246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.590 [2024-11-15 11:52:57.867264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.590 [2024-11-15 11:52:57.867271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:32.590 [2024-11-15 11:52:57.874793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.590 [2024-11-15 11:52:57.874811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.590 [2024-11-15 11:52:57.874817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:32.590 [2024-11-15 11:52:57.883012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.590 [2024-11-15 11:52:57.883030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.590 [2024-11-15 11:52:57.883037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:32.590 [2024-11-15 11:52:57.894229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.590 [2024-11-15 11:52:57.894247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.590 [2024-11-15 11:52:57.894256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:32.590 [2024-11-15 11:52:57.904496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.590 [2024-11-15 11:52:57.904514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.590 [2024-11-15 11:52:57.904520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:32.590 [2024-11-15 11:52:57.911774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.590 [2024-11-15 11:52:57.911791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.591 
[2024-11-15 11:52:57.911798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:32.591 [2024-11-15 11:52:57.920312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.591 [2024-11-15 11:52:57.920330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.591 [2024-11-15 11:52:57.920336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:32.591 [2024-11-15 11:52:57.932020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.591 [2024-11-15 11:52:57.932038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.591 [2024-11-15 11:52:57.932044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:32.591 [2024-11-15 11:52:57.943552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.591 [2024-11-15 11:52:57.943575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.591 [2024-11-15 11:52:57.943581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:32.591 [2024-11-15 11:52:57.954281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.591 [2024-11-15 11:52:57.954300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.591 [2024-11-15 11:52:57.954307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:32.591 [2024-11-15 11:52:57.965987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.591 [2024-11-15 11:52:57.966006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.591 [2024-11-15 11:52:57.966012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:32.591 [2024-11-15 11:52:57.972320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.591 [2024-11-15 11:52:57.972338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.591 [2024-11-15 11:52:57.972344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:32.591 [2024-11-15 11:52:57.976845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.591 [2024-11-15 11:52:57.976864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16480 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:32.591 [2024-11-15 11:52:57.976870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:32.591 [2024-11-15 11:52:57.982241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.591 [2024-11-15 11:52:57.982259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.591 [2024-11-15 11:52:57.982265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:32.591 [2024-11-15 11:52:57.986571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.591 [2024-11-15 11:52:57.986589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.591 [2024-11-15 11:52:57.986595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:32.591 [2024-11-15 11:52:57.993742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.591 [2024-11-15 11:52:57.993760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.591 [2024-11-15 11:52:57.993767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:32.591 [2024-11-15 11:52:58.001075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.591 [2024-11-15 11:52:58.001093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.591 [2024-11-15 11:52:58.001100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:32.591 [2024-11-15 11:52:58.009480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.591 [2024-11-15 11:52:58.009498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.591 [2024-11-15 11:52:58.009504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:32.591 [2024-11-15 11:52:58.014124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.591 [2024-11-15 11:52:58.014142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.591 [2024-11-15 11:52:58.014149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:32.591 [2024-11-15 11:52:58.022609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.591 [2024-11-15 11:52:58.022627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 
nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.591 [2024-11-15 11:52:58.022633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:32.591 [2024-11-15 11:52:58.031348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.591 [2024-11-15 11:52:58.031366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.591 [2024-11-15 11:52:58.031376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:32.591 [2024-11-15 11:52:58.039345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xfca870) 00:29:32.591 [2024-11-15 11:52:58.039363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.591 [2024-11-15 11:52:58.039369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:32.591 3770.00 IOPS, 471.25 MiB/s 00:29:32.591 Latency(us) 00:29:32.591 [2024-11-15T10:52:58.089Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:32.591 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:32.591 nvme0n1 : 2.00 3769.80 471.23 0.00 0.00 4241.49 754.35 15400.96 00:29:32.591 [2024-11-15T10:52:58.089Z] =================================================================================================================== 00:29:32.591 [2024-11-15T10:52:58.089Z] Total : 3769.80 471.23 0.00 0.00 4241.49 754.35 15400.96 00:29:32.591 { 00:29:32.591 "results": [ 00:29:32.591 { 00:29:32.591 "job": "nvme0n1", 00:29:32.591 "core_mask": "0x2", 00:29:32.591 "workload": "randread", 00:29:32.591 "status": "finished", 00:29:32.591 "queue_depth": 16, 00:29:32.591 "io_size": 131072, 00:29:32.591 "runtime": 2.004349, 00:29:32.591 "iops": 3769.8025643238775, 00:29:32.591 "mibps": 471.2253205404847, 00:29:32.591 "io_failed": 0, 00:29:32.591 "io_timeout": 0, 00:29:32.591 "avg_latency_us": 4241.48700899947, 00:29:32.591 "min_latency_us": 754.3466666666667, 00:29:32.591 "max_latency_us": 15400.96 00:29:32.591 } 00:29:32.591 ], 00:29:32.591 "core_count": 1 00:29:32.591 } 00:29:32.591 11:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:32.591 11:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:32.591 11:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:32.591 | .driver_specific 00:29:32.591 | .nvme_error 00:29:32.591 | .status_code 00:29:32.591 | .command_transient_transport_error' 00:29:32.591 11:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:32.852 11:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 244 > 0 )) 00:29:32.852 11:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1249761 00:29:32.852 11:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
11:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 1249761 ']'
11:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 1249761
11:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
11:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
11:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1249761
11:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
11:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
11:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1249761'
killing process with pid 1249761
11:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 1249761
Received shutdown signal, test time was about 2.000000 seconds
Latency(us)
[2024-11-15T10:52:58.350Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-11-15T10:52:58.350Z] ===================================================================================================================
[2024-11-15T10:52:58.350Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
11:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 1249761
11:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
11:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
11:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
11:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
11:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
11:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1250587
11:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1250587 /var/tmp/bperf.sock
11:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 1250587 ']'
11:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
11:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
11:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
11:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
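For reference, the get_transient_errcount check traced above (bdev_get_iostat piped through jq) reduces to the minimal sketch below. This is a reconstruction from the xtrace rather than the verbatim digest.sh source; it assumes a bdevperf instance serving RPCs on /var/tmp/bperf.sock with an attached bdev named nvme0n1, and the relative rpc.py path is illustrative.

    # Pull the per-bdev NVMe error counters (available here because bdev_nvme_set_options
    # is called with --nvme-error-stat) and extract the transient transport error count.
    errcount=$(./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # The test passes only if the injected digest corruption actually surfaced as
    # transient transport errors; the randread run above counted 244 of them.
    (( errcount > 0 ))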
11:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
11:52:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
[2024-11-15 11:52:58.463276] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
[2024-11-15 11:52:58.463335] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1250587 ]
[2024-11-15 11:52:58.546696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-15 11:52:58.576049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
11:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
11:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
11:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
11:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
11:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
11:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
11:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
11:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
11:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
11:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
nvme0n1
11:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
11:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
11:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
11:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
11:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
11:52:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
Running I/O for 2 seconds...
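The randwrite setup just traced condenses to the sketch below; the data digest errors that follow are the intended outcome of the injected crc32c corruption. This is a reconstruction from the xtrace rather than the verbatim harness: it assumes rpc_cmd addresses the nvmf target on its default RPC socket, that the bdevperf instance started above (-m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z) answers on /var/tmp/bperf.sock, and the rpc.py/bdevperf.py paths are shortened for readability.

    # Count NVMe error completions per bdev and retry failed I/O indefinitely.
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Start with accel error injection disabled on the target side.
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    # Attach the controller with TCP data digest enabled (--ddgst).
    ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Corrupt crc32c results (the digest calculation; flags copied from the trace),
    # then drive I/O for the test window.
    ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests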
[2024-11-15 11:52:59.924087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016eed4e8
[2024-11-15 11:52:59.925129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-15 11:52:59.925156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
[2024-11-15 11:52:59.932752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016eec408
[2024-11-15 11:52:59.933773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-15 11:52:59.933791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
[... 11:52:59.942466 through 11:53:00.329194: several dozen further repetitions of the same triplet omitted for brevity — tcp.c:2233 Data digest error on tqpair=(0x1ef0520) at varying pdu offsets, the failing WRITE command, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, with varying cid and lba ...]
[2024-11-15 11:53:00.336835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016eef270
[2024-11-15 11:53:00.337735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:15767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:35.099 [2024-11-15 11:53:00.337751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:35.099 [2024-11-15 11:53:00.345425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016eef270 00:29:35.099 [2024-11-15 11:53:00.346326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.099 [2024-11-15 11:53:00.346342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:35.099 [2024-11-15 11:53:00.353995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016eef270 00:29:35.099 [2024-11-15 11:53:00.354891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:2318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.099 [2024-11-15 11:53:00.354908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:35.099 [2024-11-15 11:53:00.362567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016eef270 00:29:35.099 [2024-11-15 11:53:00.363456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:4321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.099 [2024-11-15 11:53:00.363472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:35.099 [2024-11-15 11:53:00.371128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016eef270 00:29:35.099 [2024-11-15 11:53:00.372020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.100 [2024-11-15 11:53:00.372036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:35.100 [2024-11-15 11:53:00.379696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016eef270 00:29:35.100 [2024-11-15 11:53:00.380601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.100 [2024-11-15 11:53:00.380617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:35.100 [2024-11-15 11:53:00.388260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016eef270 00:29:35.100 [2024-11-15 11:53:00.389170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:24997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.100 [2024-11-15 11:53:00.389186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:35.100 [2024-11-15 11:53:00.396833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016eef270 00:29:35.100 [2024-11-15 11:53:00.397722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12413 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:35.100 [2024-11-15 11:53:00.397738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:35.100 [2024-11-15 11:53:00.405407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016eef270 00:29:35.100 [2024-11-15 11:53:00.406309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:18263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.100 [2024-11-15 11:53:00.406325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:35.100 [2024-11-15 11:53:00.414249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016ef4b08 00:29:35.100 [2024-11-15 11:53:00.414891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:13650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.100 [2024-11-15 11:53:00.414909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.100 [2024-11-15 11:53:00.422992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016ef2948 00:29:35.100 [2024-11-15 11:53:00.423989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.100 [2024-11-15 11:53:00.424006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.100 [2024-11-15 11:53:00.431549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016ef0788 00:29:35.100 [2024-11-15 11:53:00.432574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:16612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.100 [2024-11-15 11:53:00.432590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.100 [2024-11-15 11:53:00.440142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016ee0630 00:29:35.100 [2024-11-15 11:53:00.441135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.100 [2024-11-15 11:53:00.441152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.100 [2024-11-15 11:53:00.448721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016ee1b48 00:29:35.100 [2024-11-15 11:53:00.449709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:24440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.100 [2024-11-15 11:53:00.449725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.100 [2024-11-15 11:53:00.457298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016ef6890 00:29:35.100 [2024-11-15 11:53:00.458317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 
lba:19178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.100 [2024-11-15 11:53:00.458333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.100 [2024-11-15 11:53:00.465856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016efa3a0 00:29:35.100 [2024-11-15 11:53:00.466854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.100 [2024-11-15 11:53:00.466870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.100 [2024-11-15 11:53:00.474406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016efe2e8 00:29:35.100 [2024-11-15 11:53:00.475419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.100 [2024-11-15 11:53:00.475435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.100 [2024-11-15 11:53:00.482983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016ede8a8 00:29:35.100 [2024-11-15 11:53:00.483955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.100 [2024-11-15 11:53:00.483972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.100 [2024-11-15 11:53:00.491559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016ee95a0 00:29:35.100 [2024-11-15 11:53:00.492520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.100 [2024-11-15 11:53:00.492537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.100 [2024-11-15 11:53:00.500128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016ee84c0 00:29:35.100 [2024-11-15 11:53:00.501116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.100 [2024-11-15 11:53:00.501136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.100 [2024-11-15 11:53:00.508701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016ee73e0 00:29:35.100 [2024-11-15 11:53:00.509714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.100 [2024-11-15 11:53:00.509729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.100 [2024-11-15 11:53:00.517291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016ee6300 00:29:35.100 [2024-11-15 11:53:00.518308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:113 nsid:1 lba:13646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.100 [2024-11-15 11:53:00.518324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.100 [2024-11-15 11:53:00.525867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016ef5378 00:29:35.100 [2024-11-15 11:53:00.526840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.100 [2024-11-15 11:53:00.526856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.100 [2024-11-15 11:53:00.534569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016efb048 00:29:35.100 [2024-11-15 11:53:00.535579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.100 [2024-11-15 11:53:00.535596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.100 [2024-11-15 11:53:00.543148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016ee01f8 00:29:35.100 [2024-11-15 11:53:00.544138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:18178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.100 [2024-11-15 11:53:00.544153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.100 [2024-11-15 11:53:00.551718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016ef46d0 00:29:35.100 [2024-11-15 11:53:00.552733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.100 [2024-11-15 11:53:00.552748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.100 [2024-11-15 11:53:00.560279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016ef35f0 00:29:35.100 [2024-11-15 11:53:00.561296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:13182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.100 [2024-11-15 11:53:00.561313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.100 [2024-11-15 11:53:00.568857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016ef0350 00:29:35.100 [2024-11-15 11:53:00.569864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.100 [2024-11-15 11:53:00.569880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.100 [2024-11-15 11:53:00.577434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016ef1430 00:29:35.100 [2024-11-15 11:53:00.578405] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:24936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.100 [2024-11-15 11:53:00.578421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.100 [2024-11-15 11:53:00.586008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016ee0a68 00:29:35.100 [2024-11-15 11:53:00.587018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.100 [2024-11-15 11:53:00.587033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.100 [2024-11-15 11:53:00.594607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016ee1f80 00:29:35.363 [2024-11-15 11:53:00.595612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.363 [2024-11-15 11:53:00.595629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.363 [2024-11-15 11:53:00.603198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016ef9f68 00:29:35.363 [2024-11-15 11:53:00.604212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.363 [2024-11-15 11:53:00.604227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.363 [2024-11-15 11:53:00.611757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016efdeb0 00:29:35.363 [2024-11-15 11:53:00.612628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.363 [2024-11-15 11:53:00.612644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.363 [2024-11-15 11:53:00.620319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016ede470 00:29:35.363 [2024-11-15 11:53:00.621331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.363 [2024-11-15 11:53:00.621346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.363 [2024-11-15 11:53:00.628890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016ef4b08 00:29:35.363 [2024-11-15 11:53:00.629896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:6717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.363 [2024-11-15 11:53:00.629913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.363 [2024-11-15 11:53:00.637488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016ee88f8 00:29:35.363 [2024-11-15 
11:53:00.638479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:9791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.363 [2024-11-15 11:53:00.638494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.363 [2024-11-15 11:53:00.646070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016ee7818 00:29:35.363 [2024-11-15 11:53:00.647060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.363 [2024-11-15 11:53:00.647076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.363 [2024-11-15 11:53:00.654615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016ee6738 00:29:35.363 [2024-11-15 11:53:00.655620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.363 [2024-11-15 11:53:00.655636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.363 [2024-11-15 11:53:00.663196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016ef57b0 00:29:35.363 [2024-11-15 11:53:00.664194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.363 [2024-11-15 11:53:00.664210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.363 [2024-11-15 11:53:00.671778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016efb480 00:29:35.363 [2024-11-15 11:53:00.672786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.363 [2024-11-15 11:53:00.672802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.363 [2024-11-15 11:53:00.680375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016efbcf0 00:29:35.363 [2024-11-15 11:53:00.681377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.363 [2024-11-15 11:53:00.681393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.363 [2024-11-15 11:53:00.688947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edf550 00:29:35.363 [2024-11-15 11:53:00.689936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:18171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.363 [2024-11-15 11:53:00.689952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.363 [2024-11-15 11:53:00.697506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016ef3a28 
00:29:35.363 [2024-11-15 11:53:00.698506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.363 [2024-11-15 11:53:00.698522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.364 [2024-11-15 11:53:00.706088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016ef2948 00:29:35.364 [2024-11-15 11:53:00.707086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.364 [2024-11-15 11:53:00.707102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.364 [2024-11-15 11:53:00.714644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016ef0788 00:29:35.364 [2024-11-15 11:53:00.715627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.364 [2024-11-15 11:53:00.715643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.364 [2024-11-15 11:53:00.723219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016ee0630 00:29:35.364 [2024-11-15 11:53:00.724208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.364 [2024-11-15 11:53:00.724227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.364 [2024-11-15 11:53:00.731794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016ee1b48 00:29:35.364 [2024-11-15 11:53:00.732807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.364 [2024-11-15 11:53:00.732823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.364 [2024-11-15 11:53:00.740357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016ef6890 00:29:35.364 [2024-11-15 11:53:00.741365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17463 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.364 [2024-11-15 11:53:00.741382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.364 [2024-11-15 11:53:00.748933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016efa3a0 00:29:35.364 [2024-11-15 11:53:00.749930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.364 [2024-11-15 11:53:00.749946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.364 [2024-11-15 11:53:00.757492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) 
with pdu=0x200016efe2e8 00:29:35.364 [2024-11-15 11:53:00.758498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.364 [2024-11-15 11:53:00.758513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.364 [2024-11-15 11:53:00.766069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016ede8a8 00:29:35.364 [2024-11-15 11:53:00.766938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:23433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.364 [2024-11-15 11:53:00.766954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.364 [2024-11-15 11:53:00.774916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016efe720 00:29:35.364 [2024-11-15 11:53:00.776029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.364 [2024-11-15 11:53:00.776045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:35.364 [2024-11-15 11:53:00.783496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016eeb760 00:29:35.364 [2024-11-15 11:53:00.784619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:2178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.364 [2024-11-15 11:53:00.784635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:35.364 [2024-11-15 11:53:00.790812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016ee4578 00:29:35.364 [2024-11-15 11:53:00.791584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.364 [2024-11-15 11:53:00.791600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:35.364 [2024-11-15 11:53:00.799378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016ee7c50 00:29:35.364 [2024-11-15 11:53:00.800152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.364 [2024-11-15 11:53:00.800169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:35.364 [2024-11-15 11:53:00.807957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016efeb58 00:29:35.364 [2024-11-15 11:53:00.808714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:24987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.364 [2024-11-15 11:53:00.808729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:35.364 [2024-11-15 11:53:00.817144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.364 [2024-11-15 11:53:00.817641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.364 [2024-11-15 11:53:00.817658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:35.364 [2024-11-15 11:53:00.825997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.364 [2024-11-15 11:53:00.826244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.364 [2024-11-15 11:53:00.826260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:35.364 [2024-11-15 11:53:00.834892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.364 [2024-11-15 11:53:00.835132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:8590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.364 [2024-11-15 11:53:00.835148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:35.364 [2024-11-15 11:53:00.843796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.364 [2024-11-15 11:53:00.844030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:14938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.364 [2024-11-15 11:53:00.844047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:35.364 [2024-11-15 11:53:00.852662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.364 [2024-11-15 11:53:00.852881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.364 [2024-11-15 11:53:00.852898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:35.626 [2024-11-15 11:53:00.861483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.626 [2024-11-15 11:53:00.861774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:12524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.626 [2024-11-15 11:53:00.861790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:35.626 [2024-11-15 11:53:00.870379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.626 [2024-11-15 11:53:00.870640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.626 [2024-11-15 11:53:00.870656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:35.626 [2024-11-15 11:53:00.879274] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.626 [2024-11-15 11:53:00.879500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.626 [2024-11-15 11:53:00.879516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:35.626 [2024-11-15 11:53:00.888153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.626 [2024-11-15 11:53:00.888382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.626 [2024-11-15 11:53:00.888398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:35.626 [2024-11-15 11:53:00.896986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.626 [2024-11-15 11:53:00.897193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.626 [2024-11-15 11:53:00.897209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:35.626 [2024-11-15 11:53:00.905828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.626 [2024-11-15 11:53:00.906068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:15309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.626 [2024-11-15 11:53:00.906084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:35.626 29399.00 IOPS, 114.84 MiB/s [2024-11-15T10:53:01.124Z] [2024-11-15 11:53:00.914698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.626 [2024-11-15 11:53:00.914948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.626 [2024-11-15 11:53:00.914964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:35.626 [2024-11-15 11:53:00.923600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.626 [2024-11-15 11:53:00.923839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.626 [2024-11-15 11:53:00.923856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:35.626 [2024-11-15 11:53:00.932567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.626 [2024-11-15 11:53:00.932809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.626 [2024-11-15 11:53:00.932825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0064 p:0 m:0 
dnr:0 00:29:35.626 [2024-11-15 11:53:00.941420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.626 [2024-11-15 11:53:00.941676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.626 [2024-11-15 11:53:00.941693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:35.626 [2024-11-15 11:53:00.950238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.626 [2024-11-15 11:53:00.950466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.626 [2024-11-15 11:53:00.950487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:35.626 [2024-11-15 11:53:00.959083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.626 [2024-11-15 11:53:00.959342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.626 [2024-11-15 11:53:00.959359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:35.626 [2024-11-15 11:53:00.967930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.626 [2024-11-15 11:53:00.968201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:25518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.626 [2024-11-15 11:53:00.968218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:35.626 [2024-11-15 11:53:00.976794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.626 [2024-11-15 11:53:00.977045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.626 [2024-11-15 11:53:00.977061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:35.626 [2024-11-15 11:53:00.985651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.626 [2024-11-15 11:53:00.985868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.626 [2024-11-15 11:53:00.985884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:35.626 [2024-11-15 11:53:00.994531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.626 [2024-11-15 11:53:00.994764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.627 [2024-11-15 11:53:00.994781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 
cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:35.627 [2024-11-15 11:53:01.003354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.627 [2024-11-15 11:53:01.003589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:6131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.627 [2024-11-15 11:53:01.003606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:35.627 [2024-11-15 11:53:01.012267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.627 [2024-11-15 11:53:01.012494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.627 [2024-11-15 11:53:01.012510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:35.627 [2024-11-15 11:53:01.021144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.627 [2024-11-15 11:53:01.021356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.627 [2024-11-15 11:53:01.021372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:35.627 [2024-11-15 11:53:01.030003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.627 [2024-11-15 11:53:01.030230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.627 [2024-11-15 11:53:01.030248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:35.627 [2024-11-15 11:53:01.038880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.627 [2024-11-15 11:53:01.039120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.627 [2024-11-15 11:53:01.039136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:35.627 [2024-11-15 11:53:01.047775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.627 [2024-11-15 11:53:01.048023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.627 [2024-11-15 11:53:01.048039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:35.627 [2024-11-15 11:53:01.056613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.627 [2024-11-15 11:53:01.056843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.627 [2024-11-15 11:53:01.056859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:111 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:35.627 [2024-11-15 11:53:01.065460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.627 [2024-11-15 11:53:01.065699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:3034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.627 [2024-11-15 11:53:01.065715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:35.627 [2024-11-15 11:53:01.074339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.627 [2024-11-15 11:53:01.074576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.627 [2024-11-15 11:53:01.074593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:35.627 [2024-11-15 11:53:01.083156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.627 [2024-11-15 11:53:01.083416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.627 [2024-11-15 11:53:01.083432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:35.627 [2024-11-15 11:53:01.092000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.627 [2024-11-15 11:53:01.092238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.627 [2024-11-15 11:53:01.092254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:35.627 [2024-11-15 11:53:01.100890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.627 [2024-11-15 11:53:01.101148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:23152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.627 [2024-11-15 11:53:01.101164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:35.627 [2024-11-15 11:53:01.109707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.627 [2024-11-15 11:53:01.109956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.627 [2024-11-15 11:53:01.109972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:35.627 [2024-11-15 11:53:01.118603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.627 [2024-11-15 11:53:01.118889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:25410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.627 [2024-11-15 11:53:01.118905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:35.888 [2024-11-15 11:53:01.127487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.888 [2024-11-15 11:53:01.127728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:10296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.888 [2024-11-15 11:53:01.127744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:35.888 [2024-11-15 11:53:01.136295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.888 [2024-11-15 11:53:01.136521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:25567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.888 [2024-11-15 11:53:01.136537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:35.888 [2024-11-15 11:53:01.145170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.888 [2024-11-15 11:53:01.145412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.888 [2024-11-15 11:53:01.145428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:35.888 [2024-11-15 11:53:01.153993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.888 [2024-11-15 11:53:01.154283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.888 [2024-11-15 11:53:01.154300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:35.888 [2024-11-15 11:53:01.163003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.888 [2024-11-15 11:53:01.163219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.888 [2024-11-15 11:53:01.163235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:35.888 [2024-11-15 11:53:01.171916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.888 [2024-11-15 11:53:01.172140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.888 [2024-11-15 11:53:01.172156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:35.888 [2024-11-15 11:53:01.180746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0 00:29:35.888 [2024-11-15 11:53:01.180855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.888 [2024-11-15 11:53:01.180874] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:29:35.888 [2024-11-15 11:53:01.189611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0520) with pdu=0x200016edece0
00:29:35.888 [2024-11-15 11:53:01.189841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:35.888 [2024-11-15 11:53:01.189857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
[... roughly 80 further data_crc32_calc_done / WRITE / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplets elided, 11:53:01.198 through 11:53:01.908: every entry reports the same tqpair (0x1ef0520), pdu (0x200016edece0), and sqhd:0064, with cids cycling through 5/56/57/72/81/110/111 and only the lba and timestamps changing ...]
00:29:36.674 29103.00 IOPS, 113.68 MiB/s
00:29:36.674 Latency(us)
00:29:36.674 [2024-11-15T10:53:02.172Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:36.674 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:29:36.674 nvme0n1             :       2.01   29105.62     113.69       0.00     0.00    4390.32    2198.19   14090.24
00:29:36.675 [2024-11-15T10:53:02.172Z] ===================================================================================================================
00:29:36.675 [2024-11-15T10:53:02.173Z] Total               :              29105.62     113.69       0.00     0.00    4390.32    2198.19   14090.24
00:29:36.675 {
00:29:36.675   "results": [
00:29:36.675     {
00:29:36.675       "job": "nvme0n1",
00:29:36.675       "core_mask": "0x2",
00:29:36.675       "workload": "randwrite",
00:29:36.675       "status": "finished",
00:29:36.675       "queue_depth": 128,
00:29:36.675       "io_size": 4096,
00:29:36.675       "runtime": 2.005317,
00:29:36.675       "iops": 29105.62270204661,
00:29:36.675       "mibps": 113.69383867986957,
00:29:36.675       "io_failed": 0,
00:29:36.675       "io_timeout": 0,
00:29:36.675       "avg_latency_us": 4390.31745331186,
00:29:36.675       "min_latency_us": 2198.1866666666665,
00:29:36.675       "max_latency_us": 14090.24
00:29:36.675     }
00:29:36.675   ],
00:29:36.675   "core_count": 1
00:29:36.675 }
00:29:36.675 11:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:36.675 11:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:36.675 11:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:36.675 | .driver_specific
00:29:36.675 | .nvme_error
00:29:36.675 | .status_code
00:29:36.675 | .command_transient_transport_error'
00:29:36.675 11:53:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:36.675 11:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 228 > 0 ))
00:29:36.675 11:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1250587
00:29:36.675 11:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 1250587 ']'
00:29:36.675 11:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 1250587
00:29:36.675 11:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:29:36.675 11:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:29:36.675 11:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1250587
00:29:36.936 11:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:29:36.936 11:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:29:36.936 11:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1250587'
00:29:36.936 killing process with pid 1250587
00:29:36.936 11:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 1250587
00:29:36.936 Received shutdown signal, test time was about 2.000000 seconds
00:29:36.936
00:29:36.936 Latency(us)
00:29:36.936 [2024-11-15T10:53:02.434Z] Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:36.936 [2024-11-15T10:53:02.434Z] ===================================================================================================================
00:29:36.936 [2024-11-15T10:53:02.434Z] Total               :       0.00       0.00       0.00       0.00     0.00       0.00       0.00
00:29:36.936 11:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 1250587
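The trace above is the pass/fail core of this test case: host/digest.sh counts how many injected CRC errors surfaced as COMMAND TRANSIENT TRANSPORT ERROR completions by reading bdevperf's iostat over the bperf RPC socket and filtering a single counter out with jq, then asserts that the count (228 here) is greater than zero. A minimal standalone sketch of that step, assuming the same socket path and bdev name shown in the trace (a reconstruction, not the verbatim digest.sh source):

#!/usr/bin/env bash
# Sketch of the get_transient_errcount step traced above. Assumes bdevperf
# is listening on /var/tmp/bperf.sock and was configured with
# bdev_nvme_set_options --nvme-error-stat; without that option the
# driver_specific.nvme_error counters are not populated.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

get_transient_errcount() {
    local bdev=$1
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
        | jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error'
}

# The test passes when at least one corrupted digest came back as a
# COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion; 228 did here.
(( $(get_transient_errcount nvme0n1) > 0 ))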
00:29:36.936 11:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:29:36.936 11:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:36.936 11:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:36.936 11:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:36.936 11:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:36.937 11:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1251444
00:29:36.937 11:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1251444 /var/tmp/bperf.sock
00:29:36.937 11:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 1251444 ']'
00:29:36.937 11:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:29:36.937 11:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:36.937 11:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:29:36.937 11:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:36.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:36.937 11:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:29:36.937 11:53:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:36.937 [2024-11-15 11:53:02.358360] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
00:29:36.937 [2024-11-15 11:53:02.358418] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1251444 ]
00:29:36.937 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:36.937 Zero copy mechanism will not be used.
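run_bperf_err, entered above, starts bdevperf idle (-z, wait for RPC configuration) on core 1 with the workload parameters from the digest.sh call site (randwrite, 131072-byte I/Os, queue depth 16, 2-second run) and then blocks until the UNIX-domain RPC socket is up. A hedged sketch of that launch step; wait_for_sock is a simplified stand-in for the waitforlisten helper from autotest_common.sh, not its actual implementation:

#!/usr/bin/env bash
# Sketch of the bdevperf launch performed by run_bperf_err above.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock
rw=randwrite bs=131072 qd=16

# -z: start idle and wait for RPC configuration; -m 2: pin to core 1.
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
    -w "$rw" -o "$bs" -t 2 -q "$qd" -z &
bperfpid=$!

# Simplified stand-in for waitforlisten: poll until the RPC socket
# exists, bailing out early if bdevperf died during startup.
wait_for_sock() {
    local pid=$1 sock=$2 retries=100
    while (( retries-- > 0 )); do
        kill -0 "$pid" 2>/dev/null || return 1
        [[ -S $sock ]] && return 0
        sleep 0.1
    done
    return 1
}
wait_for_sock "$bperfpid" "$BPERF_SOCK"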
00:29:37.197 [2024-11-15 11:53:02.441644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:37.197 [2024-11-15 11:53:02.470797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:37.767 11:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:29:37.767 11:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:29:37.767 11:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:37.767 11:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:38.028 11:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:38.028 11:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:38.028 11:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:38.028 11:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:38.028 11:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:38.028 11:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:38.289 nvme0n1
00:29:38.289 11:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:38.289 11:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:38.289 11:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:38.289 11:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:38.289 11:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:38.289 11:53:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:38.550 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:38.550 Zero copy mechanism will not be used.
00:29:38.550 Running I/O for 2 seconds...
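Condensed, the RPC sequence traced above is what produces the burst of digest errors that follows: NVMe error counting is enabled and the retry count made unlimited (so corrupted I/Os are retried rather than failed), any stale crc32c injection is cleared, the controller is attached with TCP data digest (--ddgst) enabled, and the accel layer is told to corrupt the next 32 crc32c results before perform_tests starts the timed run. A sketch of just those calls, with the socket, target address, and NQN exactly as they appear in the trace:

#!/usr/bin/env bash
# Sketch of the RPC setup traced above (host/digest.sh@61 through @69).
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }

# Count NVMe errors per status code and retry forever, so injected digest
# errors show up as transient-error counters instead of failed I/O.
rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Clear any leftover crc32c injection, then attach the target with data
# digest enabled on the TCP connection.
rpc accel_error_inject_error -o crc32c -t disable
rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt the next 32 crc32c results, then kick off the timed workload.
rpc accel_error_inject_error -o crc32c -t corrupt -i 32
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests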
00:29:38.550 [2024-11-15 11:53:03.865109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8
00:29:38.550 [2024-11-15 11:53:03.865175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:38.550 [2024-11-15 11:53:03.865206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... roughly 40 further data_crc32_calc_done / WRITE / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplets elided, 11:53:03.869 through 11:53:04.172: every entry reports the same tqpair (0x1ef0860) and pdu (0x200016eff3c8) for 128 KiB writes (len:32) on cid 0 or 1, with sqhd cycling 0002/0022/0042/0062 and only the lba and timestamps changing ...]
00:29:38.813 [2024-11-15 11:53:04.183473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8
00:29:38.813 [2024-11-15 11:53:04.183774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT
0x0 00:29:38.813 [2024-11-15 11:53:04.183793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.813 [2024-11-15 11:53:04.193608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:38.813 [2024-11-15 11:53:04.193830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.813 [2024-11-15 11:53:04.193849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.814 [2024-11-15 11:53:04.200898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:38.814 [2024-11-15 11:53:04.201275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-11-15 11:53:04.201294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.814 [2024-11-15 11:53:04.205141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:38.814 [2024-11-15 11:53:04.205299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-11-15 11:53:04.205318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.814 [2024-11-15 11:53:04.211026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:38.814 [2024-11-15 11:53:04.211183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-11-15 11:53:04.211201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.814 [2024-11-15 11:53:04.218127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:38.814 [2024-11-15 11:53:04.218287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-11-15 11:53:04.218309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.814 [2024-11-15 11:53:04.224299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:38.814 [2024-11-15 11:53:04.224590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-11-15 11:53:04.224608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.814 [2024-11-15 11:53:04.233294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:38.814 [2024-11-15 11:53:04.233606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-11-15 11:53:04.233626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.814 [2024-11-15 11:53:04.240909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:38.814 [2024-11-15 11:53:04.241187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-11-15 11:53:04.241206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.814 [2024-11-15 11:53:04.244592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:38.814 [2024-11-15 11:53:04.244753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-11-15 11:53:04.244771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.814 [2024-11-15 11:53:04.250511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:38.814 [2024-11-15 11:53:04.250828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-11-15 11:53:04.250846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.814 [2024-11-15 11:53:04.256924] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:38.814 [2024-11-15 11:53:04.257000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-11-15 11:53:04.257016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.814 [2024-11-15 11:53:04.260701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:38.814 [2024-11-15 11:53:04.260781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-11-15 11:53:04.260797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.814 [2024-11-15 11:53:04.268244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:38.814 [2024-11-15 11:53:04.268491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-11-15 11:53:04.268508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:38.814 [2024-11-15 11:53:04.278604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:38.814 [2024-11-15 11:53:04.278906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-11-15 11:53:04.278923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:38.814 [2024-11-15 11:53:04.288372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:38.814 [2024-11-15 11:53:04.288587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-11-15 11:53:04.288605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:38.814 [2024-11-15 11:53:04.298853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:38.814 [2024-11-15 11:53:04.299122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-11-15 11:53:04.299139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:38.814 [2024-11-15 11:53:04.308346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:38.814 [2024-11-15 11:53:04.308578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.814 [2024-11-15 11:53:04.308596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.077 [2024-11-15 11:53:04.319188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.077 [2024-11-15 11:53:04.319519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.077 [2024-11-15 11:53:04.319536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.077 [2024-11-15 11:53:04.328211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.077 [2024-11-15 11:53:04.328443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.077 [2024-11-15 11:53:04.328461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.077 [2024-11-15 11:53:04.332889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.077 [2024-11-15 11:53:04.332949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.077 [2024-11-15 11:53:04.332970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.077 [2024-11-15 11:53:04.335834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.077 [2024-11-15 11:53:04.335901] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.077 [2024-11-15 11:53:04.335921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.077 [2024-11-15 11:53:04.338598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.077 [2024-11-15 11:53:04.338655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.077 [2024-11-15 11:53:04.338675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.077 [2024-11-15 11:53:04.341295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.077 [2024-11-15 11:53:04.341363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.077 [2024-11-15 11:53:04.341383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.077 [2024-11-15 11:53:04.344034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.077 [2024-11-15 11:53:04.344091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.078 [2024-11-15 11:53:04.344113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.078 [2024-11-15 11:53:04.346748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.078 [2024-11-15 11:53:04.346807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.078 [2024-11-15 11:53:04.346827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.078 [2024-11-15 11:53:04.349469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.078 [2024-11-15 11:53:04.349538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.078 [2024-11-15 11:53:04.349555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.078 [2024-11-15 11:53:04.352227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.078 [2024-11-15 11:53:04.352284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.078 [2024-11-15 11:53:04.352304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.078 [2024-11-15 11:53:04.354794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.078 [2024-11-15 11:53:04.354853] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.078 [2024-11-15 11:53:04.354873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.078 [2024-11-15 11:53:04.357491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.078 [2024-11-15 11:53:04.357553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.078 [2024-11-15 11:53:04.357578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.078 [2024-11-15 11:53:04.362491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.078 [2024-11-15 11:53:04.362588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.078 [2024-11-15 11:53:04.362606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.078 [2024-11-15 11:53:04.366849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.078 [2024-11-15 11:53:04.366908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.078 [2024-11-15 11:53:04.366932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.078 [2024-11-15 11:53:04.370888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.078 [2024-11-15 11:53:04.370954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.078 [2024-11-15 11:53:04.370973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.078 [2024-11-15 11:53:04.373710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.078 [2024-11-15 11:53:04.373769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.078 [2024-11-15 11:53:04.373788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.078 [2024-11-15 11:53:04.376259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.078 [2024-11-15 11:53:04.376318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.078 [2024-11-15 11:53:04.376340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.078 [2024-11-15 11:53:04.378805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.078 [2024-11-15 
11:53:04.378910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.078 [2024-11-15 11:53:04.378930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.078 [2024-11-15 11:53:04.382316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.078 [2024-11-15 11:53:04.382427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.078 [2024-11-15 11:53:04.382445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.078 [2024-11-15 11:53:04.391381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.078 [2024-11-15 11:53:04.391631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.078 [2024-11-15 11:53:04.391648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.078 [2024-11-15 11:53:04.397073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.078 [2024-11-15 11:53:04.397214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.078 [2024-11-15 11:53:04.397232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.078 [2024-11-15 11:53:04.400509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.078 [2024-11-15 11:53:04.400646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.078 [2024-11-15 11:53:04.400665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.078 [2024-11-15 11:53:04.403391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.078 [2024-11-15 11:53:04.403470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.078 [2024-11-15 11:53:04.403488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.078 [2024-11-15 11:53:04.406035] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.078 [2024-11-15 11:53:04.406107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.078 [2024-11-15 11:53:04.406125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.078 [2024-11-15 11:53:04.408618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with 
pdu=0x200016eff3c8 00:29:39.078 [2024-11-15 11:53:04.408691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.078 [2024-11-15 11:53:04.408709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.078 [2024-11-15 11:53:04.411130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.078 [2024-11-15 11:53:04.411202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.078 [2024-11-15 11:53:04.411219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.078 [2024-11-15 11:53:04.413658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.078 [2024-11-15 11:53:04.413731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.078 [2024-11-15 11:53:04.413750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.078 [2024-11-15 11:53:04.416191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.078 [2024-11-15 11:53:04.416262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.078 [2024-11-15 11:53:04.416280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.078 [2024-11-15 11:53:04.418820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.078 [2024-11-15 11:53:04.418891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.078 [2024-11-15 11:53:04.418909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.078 [2024-11-15 11:53:04.421339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.078 [2024-11-15 11:53:04.421411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.078 [2024-11-15 11:53:04.421428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.078 [2024-11-15 11:53:04.424024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.078 [2024-11-15 11:53:04.424109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.078 [2024-11-15 11:53:04.424126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.078 [2024-11-15 11:53:04.430282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.078 [2024-11-15 11:53:04.430553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.078 [2024-11-15 11:53:04.430578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.078 [2024-11-15 11:53:04.436084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.078 [2024-11-15 11:53:04.436175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.078 [2024-11-15 11:53:04.436193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.078 [2024-11-15 11:53:04.441988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.078 [2024-11-15 11:53:04.442061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.079 [2024-11-15 11:53:04.442078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.079 [2024-11-15 11:53:04.449722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.079 [2024-11-15 11:53:04.449953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.079 [2024-11-15 11:53:04.449971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.079 [2024-11-15 11:53:04.459017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.079 [2024-11-15 11:53:04.459260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.079 [2024-11-15 11:53:04.459277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.079 [2024-11-15 11:53:04.468901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.079 [2024-11-15 11:53:04.469205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.079 [2024-11-15 11:53:04.469222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.079 [2024-11-15 11:53:04.479025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.079 [2024-11-15 11:53:04.479257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.079 [2024-11-15 11:53:04.479275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.079 [2024-11-15 11:53:04.488770] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.079 [2024-11-15 11:53:04.488974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.079 [2024-11-15 11:53:04.488992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.079 [2024-11-15 11:53:04.499163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.079 [2024-11-15 11:53:04.499445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.079 [2024-11-15 11:53:04.499465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.079 [2024-11-15 11:53:04.509363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.079 [2024-11-15 11:53:04.509596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.079 [2024-11-15 11:53:04.509613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.079 [2024-11-15 11:53:04.519418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.079 [2024-11-15 11:53:04.519515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.079 [2024-11-15 11:53:04.519532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.079 [2024-11-15 11:53:04.529697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.079 [2024-11-15 11:53:04.529923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.079 [2024-11-15 11:53:04.529941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.079 [2024-11-15 11:53:04.539078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.079 [2024-11-15 11:53:04.539396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.079 [2024-11-15 11:53:04.539413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.079 [2024-11-15 11:53:04.544191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.079 [2024-11-15 11:53:04.544250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.079 [2024-11-15 11:53:04.544271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.079 [2024-11-15 11:53:04.547908] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.079 [2024-11-15 11:53:04.547968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.079 [2024-11-15 11:53:04.547988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.079 [2024-11-15 11:53:04.551100] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.079 [2024-11-15 11:53:04.551159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.079 [2024-11-15 11:53:04.551180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.079 [2024-11-15 11:53:04.554411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.079 [2024-11-15 11:53:04.554469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.079 [2024-11-15 11:53:04.554489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.079 [2024-11-15 11:53:04.557499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.079 [2024-11-15 11:53:04.557571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.079 [2024-11-15 11:53:04.557591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.079 [2024-11-15 11:53:04.560531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.079 [2024-11-15 11:53:04.560599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.079 [2024-11-15 11:53:04.560618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.079 [2024-11-15 11:53:04.563150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.079 [2024-11-15 11:53:04.563209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.079 [2024-11-15 11:53:04.563229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.079 [2024-11-15 11:53:04.565704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.079 [2024-11-15 11:53:04.565762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.079 [2024-11-15 11:53:04.565782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.079 
[2024-11-15 11:53:04.568290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.079 [2024-11-15 11:53:04.568354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.079 [2024-11-15 11:53:04.568373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.079 [2024-11-15 11:53:04.571017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.079 [2024-11-15 11:53:04.571074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.079 [2024-11-15 11:53:04.571094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.341 [2024-11-15 11:53:04.573621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.341 [2024-11-15 11:53:04.573691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.341 [2024-11-15 11:53:04.573710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.341 [2024-11-15 11:53:04.581343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.341 [2024-11-15 11:53:04.581416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.341 [2024-11-15 11:53:04.581434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.341 [2024-11-15 11:53:04.584041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.341 [2024-11-15 11:53:04.584111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.341 [2024-11-15 11:53:04.584130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.341 [2024-11-15 11:53:04.587001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.341 [2024-11-15 11:53:04.587068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.341 [2024-11-15 11:53:04.587087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.341 [2024-11-15 11:53:04.590069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.341 [2024-11-15 11:53:04.590183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.341 [2024-11-15 11:53:04.590201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:29:39.341 [2024-11-15 11:53:04.595273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.341 [2024-11-15 11:53:04.595551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.341 [2024-11-15 11:53:04.595575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.341 [2024-11-15 11:53:04.604023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.341 [2024-11-15 11:53:04.604113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.341 [2024-11-15 11:53:04.604129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.341 [2024-11-15 11:53:04.608209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.341 [2024-11-15 11:53:04.608339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.341 [2024-11-15 11:53:04.608355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.341 [2024-11-15 11:53:04.616567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.341 [2024-11-15 11:53:04.616853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.341 [2024-11-15 11:53:04.616869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.341 [2024-11-15 11:53:04.619703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.341 [2024-11-15 11:53:04.619775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.341 [2024-11-15 11:53:04.619795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.341 [2024-11-15 11:53:04.622606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.341 [2024-11-15 11:53:04.622664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.341 [2024-11-15 11:53:04.622684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.341 [2024-11-15 11:53:04.625448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.341 [2024-11-15 11:53:04.625529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.341 [2024-11-15 11:53:04.625552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.341 [2024-11-15 11:53:04.628180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.341 [2024-11-15 11:53:04.628253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.341 [2024-11-15 11:53:04.628273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.341 [2024-11-15 11:53:04.631119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.341 [2024-11-15 11:53:04.631186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.341 [2024-11-15 11:53:04.631206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.341 [2024-11-15 11:53:04.633692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.341 [2024-11-15 11:53:04.633763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.341 [2024-11-15 11:53:04.633782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.341 [2024-11-15 11:53:04.638485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.341 [2024-11-15 11:53:04.638574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.341 [2024-11-15 11:53:04.638592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.341 [2024-11-15 11:53:04.641714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.341 [2024-11-15 11:53:04.641785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.341 [2024-11-15 11:53:04.641804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.342 [2024-11-15 11:53:04.644229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.342 [2024-11-15 11:53:04.644297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.342 [2024-11-15 11:53:04.644315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.342 [2024-11-15 11:53:04.647401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.342 [2024-11-15 11:53:04.647520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.342 [2024-11-15 11:53:04.647537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.342 [2024-11-15 11:53:04.654400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.342 [2024-11-15 11:53:04.654598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.342 [2024-11-15 11:53:04.654615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.342 [2024-11-15 11:53:04.663658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.342 [2024-11-15 11:53:04.663969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.342 [2024-11-15 11:53:04.663986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.342 [2024-11-15 11:53:04.672060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.342 [2024-11-15 11:53:04.672118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.342 [2024-11-15 11:53:04.672140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.342 [2024-11-15 11:53:04.675895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.342 [2024-11-15 11:53:04.675974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.342 [2024-11-15 11:53:04.675991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.342 [2024-11-15 11:53:04.681765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.342 [2024-11-15 11:53:04.681836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.342 [2024-11-15 11:53:04.681854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.342 [2024-11-15 11:53:04.689512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.342 [2024-11-15 11:53:04.689584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.342 [2024-11-15 11:53:04.689603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.342 [2024-11-15 11:53:04.693451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.342 [2024-11-15 11:53:04.693511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.342 [2024-11-15 11:53:04.693531] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.342 [2024-11-15 11:53:04.699158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.342 [2024-11-15 11:53:04.699231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.342 [2024-11-15 11:53:04.699250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.342 [2024-11-15 11:53:04.704674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.342 [2024-11-15 11:53:04.704746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.342 [2024-11-15 11:53:04.704764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.342 [2024-11-15 11:53:04.707952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.342 [2024-11-15 11:53:04.708012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.342 [2024-11-15 11:53:04.708032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.342 [2024-11-15 11:53:04.710734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.342 [2024-11-15 11:53:04.710808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.342 [2024-11-15 11:53:04.710826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.342 [2024-11-15 11:53:04.713559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.342 [2024-11-15 11:53:04.713623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.342 [2024-11-15 11:53:04.713644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.342 [2024-11-15 11:53:04.716437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.342 [2024-11-15 11:53:04.716497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.342 [2024-11-15 11:53:04.716515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.342 [2024-11-15 11:53:04.719147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.342 [2024-11-15 11:53:04.719252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.342 [2024-11-15 
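Every record in this run is the same data-digest failure path: the receiving side of the TCP connection recomputes a CRC32C over the PDU's DATA field in data_crc32_calc_done, the result does not match the DDGST value carried at the tail of the PDU, and the pending WRITE is completed with status (00/22), which decodes as Status Code Type 0x0 (Generic Command Status) and Status Code 0x22 (Transient Transport Error). dnr:0 means the Do Not Retry bit is clear, so the host may resubmit the command; the pattern repeating for every WRITE on the queue pair is consistent with a digest error-injection pass rather than real link corruption. The sketch below shows the digest check in isolation: a minimal standalone C program assuming only the NVMe/TCP digest parameters (CRC32C, the Castagnoli polynomial, with 0xFFFFFFFF initial value and final XOR); the function names and the bit-flip step are illustrative, not SPDK's actual tcp.c code.

    /* crc32c_ddgst.c - standalone sketch of the NVMe/TCP data digest check.
     * Illustrative only: function names here are made up for this example
     * and are not SPDK's implementation. */
    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* CRC32C (Castagnoli): reflected polynomial 0x82F63B78, init and final
     * XOR both 0xFFFFFFFF. This is the algorithm NVMe/TCP specifies for the
     * HDGST and DDGST fields. Bitwise version, no lookup table. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;

        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int bit = 0; bit < 8; bit++) {
                crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
            }
        }
        return crc ^ 0xFFFFFFFFu;
    }

    /* Receiver side: recompute the digest over the PDU DATA field and
     * compare it with the DDGST that arrived at the tail of the PDU. */
    static int ddgst_ok(const uint8_t *data, size_t len, uint32_t ddgst)
    {
        return crc32c(data, len) == ddgst;
    }

    int main(void)
    {
        /* Standard CRC-32C check value: crc32c("123456789") == 0xE3069283. */
        assert(crc32c((const uint8_t *)"123456789", 9) == 0xE3069283u);

        /* Example payload with a digest computed over the original bytes. */
        uint8_t data[32] = {0};
        uint32_t ddgst = crc32c(data, sizeof(data));

        data[0] ^= 0x01; /* corrupt one bit, as a digest-injection test would */

        printf("digest %s\n",
               ddgst_ok(data, sizeof(data), ddgst)
                   ? "ok"
                   : "mismatch -> TRANSIENT TRANSPORT ERROR (00/22), dnr:0");
        return 0;
    }

A production implementation would use a table-driven or hardware-assisted CRC32C (for example the SSE4.2 crc32 instruction family) rather than this bitwise loop, which is written for clarity; the parameters and the mismatch-to-(00/22) outcome are the same.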
11:53:04.719269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.342 [2024-11-15 11:53:04.721790] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.342 [2024-11-15 11:53:04.721848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.342 [2024-11-15 11:53:04.721868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.342 [2024-11-15 11:53:04.724683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.342 [2024-11-15 11:53:04.724790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.342 [2024-11-15 11:53:04.724806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.342 [2024-11-15 11:53:04.728023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.342 [2024-11-15 11:53:04.728088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.342 [2024-11-15 11:53:04.728109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.342 [2024-11-15 11:53:04.732240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.342 [2024-11-15 11:53:04.732314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.342 [2024-11-15 11:53:04.732331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.342 [2024-11-15 11:53:04.737769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.342 [2024-11-15 11:53:04.738072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.342 [2024-11-15 11:53:04.738092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.342 [2024-11-15 11:53:04.742407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.342 [2024-11-15 11:53:04.742466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.342 [2024-11-15 11:53:04.742485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.342 [2024-11-15 11:53:04.746673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.342 [2024-11-15 11:53:04.746731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:39.342 [2024-11-15 11:53:04.746751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.342 [2024-11-15 11:53:04.753687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.342 [2024-11-15 11:53:04.753888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.342 [2024-11-15 11:53:04.753905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.342 [2024-11-15 11:53:04.762065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.342 [2024-11-15 11:53:04.762306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.342 [2024-11-15 11:53:04.762322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.342 [2024-11-15 11:53:04.769629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.342 [2024-11-15 11:53:04.769705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.342 [2024-11-15 11:53:04.769725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.342 [2024-11-15 11:53:04.774946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.342 [2024-11-15 11:53:04.775023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.342 [2024-11-15 11:53:04.775039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.342 [2024-11-15 11:53:04.778226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.343 [2024-11-15 11:53:04.778296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.343 [2024-11-15 11:53:04.778316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.343 [2024-11-15 11:53:04.782648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.343 [2024-11-15 11:53:04.782717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.343 [2024-11-15 11:53:04.782736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.343 [2024-11-15 11:53:04.785912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.343 [2024-11-15 11:53:04.785987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:39.343 [2024-11-15 11:53:04.786005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.343 [2024-11-15 11:53:04.789082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.343 [2024-11-15 11:53:04.789154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.343 [2024-11-15 11:53:04.789173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.343 [2024-11-15 11:53:04.792988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.343 [2024-11-15 11:53:04.793057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.343 [2024-11-15 11:53:04.793075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.343 [2024-11-15 11:53:04.796664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.343 [2024-11-15 11:53:04.796734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.343 [2024-11-15 11:53:04.796751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.343 [2024-11-15 11:53:04.800544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.343 [2024-11-15 11:53:04.800620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.343 [2024-11-15 11:53:04.800640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.343 [2024-11-15 11:53:04.804181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.343 [2024-11-15 11:53:04.804265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.343 [2024-11-15 11:53:04.804282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.343 [2024-11-15 11:53:04.808283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.343 [2024-11-15 11:53:04.808381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.343 [2024-11-15 11:53:04.808399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.343 [2024-11-15 11:53:04.813626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.343 [2024-11-15 11:53:04.813698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.343 [2024-11-15 11:53:04.813716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.343 [2024-11-15 11:53:04.817681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.343 [2024-11-15 11:53:04.817773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.343 [2024-11-15 11:53:04.817791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.343 [2024-11-15 11:53:04.821371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.343 [2024-11-15 11:53:04.821460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.343 [2024-11-15 11:53:04.821477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.343 [2024-11-15 11:53:04.826711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.343 [2024-11-15 11:53:04.826997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.343 [2024-11-15 11:53:04.827015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.343 [2024-11-15 11:53:04.832626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.343 [2024-11-15 11:53:04.832674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.343 [2024-11-15 11:53:04.832694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.343 [2024-11-15 11:53:04.835758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.343 [2024-11-15 11:53:04.835843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.343 [2024-11-15 11:53:04.835861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.605 [2024-11-15 11:53:04.839680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.605 [2024-11-15 11:53:04.839921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.605 [2024-11-15 11:53:04.839938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.605 [2024-11-15 11:53:04.846984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.605 [2024-11-15 11:53:04.847032] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.605 [2024-11-15 11:53:04.847051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.605 [2024-11-15 11:53:04.855211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.605 [2024-11-15 11:53:04.855274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.605 [2024-11-15 11:53:04.855293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.605 5398.00 IOPS, 674.75 MiB/s [2024-11-15T10:53:05.103Z] [2024-11-15 11:53:04.863762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.605 [2024-11-15 11:53:04.863965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.605 [2024-11-15 11:53:04.863982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.605 [2024-11-15 11:53:04.870036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.605 [2024-11-15 11:53:04.870105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.605 [2024-11-15 11:53:04.870128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.605 [2024-11-15 11:53:04.874175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.605 [2024-11-15 11:53:04.874266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.605 [2024-11-15 11:53:04.874286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.605 [2024-11-15 11:53:04.880278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.605 [2024-11-15 11:53:04.880471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.605 [2024-11-15 11:53:04.880488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.605 [2024-11-15 11:53:04.884345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.605 [2024-11-15 11:53:04.884420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.605 [2024-11-15 11:53:04.884441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.605 [2024-11-15 11:53:04.887177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 
00:29:39.605 [2024-11-15 11:53:04.887240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.605 [2024-11-15 11:53:04.887262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.605 [2024-11-15 11:53:04.889804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.605 [2024-11-15 11:53:04.889876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.605 [2024-11-15 11:53:04.889895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.605 [2024-11-15 11:53:04.892622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.605 [2024-11-15 11:53:04.892684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.605 [2024-11-15 11:53:04.892705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.605 [2024-11-15 11:53:04.895303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.605 [2024-11-15 11:53:04.895361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.605 [2024-11-15 11:53:04.895379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.605 [2024-11-15 11:53:04.897900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.605 [2024-11-15 11:53:04.897987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.605 [2024-11-15 11:53:04.898004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.605 [2024-11-15 11:53:04.900470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.605 [2024-11-15 11:53:04.900529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.605 [2024-11-15 11:53:04.900550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.605 [2024-11-15 11:53:04.903016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.605 [2024-11-15 11:53:04.903072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.606 [2024-11-15 11:53:04.903093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.606 [2024-11-15 11:53:04.905542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.606 [2024-11-15 11:53:04.905601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.606 [2024-11-15 11:53:04.905622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.606 [2024-11-15 11:53:04.908039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.606 [2024-11-15 11:53:04.908085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.606 [2024-11-15 11:53:04.908104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.606 [2024-11-15 11:53:04.910519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.606 [2024-11-15 11:53:04.910572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.606 [2024-11-15 11:53:04.910591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.606 [2024-11-15 11:53:04.913121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.606 [2024-11-15 11:53:04.913169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.606 [2024-11-15 11:53:04.913187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.606 [2024-11-15 11:53:04.916323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.606 [2024-11-15 11:53:04.916379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.606 [2024-11-15 11:53:04.916398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.606 [2024-11-15 11:53:04.919063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.606 [2024-11-15 11:53:04.919117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.606 [2024-11-15 11:53:04.919135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.606 [2024-11-15 11:53:04.921547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.606 [2024-11-15 11:53:04.921611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.606 [2024-11-15 11:53:04.921632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.606 [2024-11-15 11:53:04.924042] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.606 [2024-11-15 11:53:04.924100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.606 [2024-11-15 11:53:04.924120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.606 [2024-11-15 11:53:04.928362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.606 [2024-11-15 11:53:04.928572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.606 [2024-11-15 11:53:04.928589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.606 [2024-11-15 11:53:04.934136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.606 [2024-11-15 11:53:04.934203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.606 [2024-11-15 11:53:04.934221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.606 [2024-11-15 11:53:04.936735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.606 [2024-11-15 11:53:04.936784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.606 [2024-11-15 11:53:04.936803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.606 [2024-11-15 11:53:04.939274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.606 [2024-11-15 11:53:04.939319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.606 [2024-11-15 11:53:04.939338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.606 [2024-11-15 11:53:04.941901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.606 [2024-11-15 11:53:04.941974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.606 [2024-11-15 11:53:04.941990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.606 [2024-11-15 11:53:04.945247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.606 [2024-11-15 11:53:04.945579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.606 [2024-11-15 11:53:04.945595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.606 [2024-11-15 11:53:04.948555] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.606 [2024-11-15 11:53:04.948637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.606 [2024-11-15 11:53:04.948654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.606 [2024-11-15 11:53:04.951244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.606 [2024-11-15 11:53:04.951295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.606 [2024-11-15 11:53:04.951324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.606 [2024-11-15 11:53:04.953781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.606 [2024-11-15 11:53:04.953836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.606 [2024-11-15 11:53:04.953855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.606 [2024-11-15 11:53:04.956319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.606 [2024-11-15 11:53:04.956363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.606 [2024-11-15 11:53:04.956383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.606 [2024-11-15 11:53:04.958841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.606 [2024-11-15 11:53:04.958886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.606 [2024-11-15 11:53:04.958905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.606 [2024-11-15 11:53:04.961351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.606 [2024-11-15 11:53:04.961411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.606 [2024-11-15 11:53:04.961431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.606 [2024-11-15 11:53:04.963859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.606 [2024-11-15 11:53:04.963910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.606 [2024-11-15 11:53:04.963930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.606 
[2024-11-15 11:53:04.967845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.606 [2024-11-15 11:53:04.967889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.606 [2024-11-15 11:53:04.967908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.606 [2024-11-15 11:53:04.974275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.606 [2024-11-15 11:53:04.974324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.606 [2024-11-15 11:53:04.974342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.606 [2024-11-15 11:53:04.977301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.606 [2024-11-15 11:53:04.977346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.606 [2024-11-15 11:53:04.977365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.606 [2024-11-15 11:53:04.980687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.606 [2024-11-15 11:53:04.980735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.606 [2024-11-15 11:53:04.980755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.606 [2024-11-15 11:53:04.983232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.606 [2024-11-15 11:53:04.983315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.606 [2024-11-15 11:53:04.983331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.606 [2024-11-15 11:53:04.986513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.606 [2024-11-15 11:53:04.986629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.606 [2024-11-15 11:53:04.986646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.606 [2024-11-15 11:53:04.989200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.606 [2024-11-15 11:53:04.989259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.607 [2024-11-15 11:53:04.989279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:29:39.607 [2024-11-15 11:53:04.991740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.607 [2024-11-15 11:53:04.991798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.607 [2024-11-15 11:53:04.991816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.607 [2024-11-15 11:53:04.994364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.607 [2024-11-15 11:53:04.994410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.607 [2024-11-15 11:53:04.994429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.607 [2024-11-15 11:53:05.000923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.607 [2024-11-15 11:53:05.001002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.607 [2024-11-15 11:53:05.001018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.607 [2024-11-15 11:53:05.006811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.607 [2024-11-15 11:53:05.007036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.607 [2024-11-15 11:53:05.007052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.607 [2024-11-15 11:53:05.012390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.607 [2024-11-15 11:53:05.012470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.607 [2024-11-15 11:53:05.012486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.607 [2024-11-15 11:53:05.015229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.607 [2024-11-15 11:53:05.015291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.607 [2024-11-15 11:53:05.015311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.607 [2024-11-15 11:53:05.020096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.607 [2024-11-15 11:53:05.020217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.607 [2024-11-15 11:53:05.020233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.607 [2024-11-15 11:53:05.022897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.607 [2024-11-15 11:53:05.022979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.607 [2024-11-15 11:53:05.022996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.607 [2024-11-15 11:53:05.025460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.607 [2024-11-15 11:53:05.025521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.607 [2024-11-15 11:53:05.025542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.607 [2024-11-15 11:53:05.028009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.607 [2024-11-15 11:53:05.028082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.607 [2024-11-15 11:53:05.028103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.607 [2024-11-15 11:53:05.031316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.607 [2024-11-15 11:53:05.031411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.607 [2024-11-15 11:53:05.031428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.607 [2024-11-15 11:53:05.034091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.607 [2024-11-15 11:53:05.034150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.607 [2024-11-15 11:53:05.034170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.607 [2024-11-15 11:53:05.036648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.607 [2024-11-15 11:53:05.036706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.607 [2024-11-15 11:53:05.036727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.607 [2024-11-15 11:53:05.039206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.607 [2024-11-15 11:53:05.039263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.607 [2024-11-15 11:53:05.039287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.607 [2024-11-15 11:53:05.042064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.607 [2024-11-15 11:53:05.042140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.607 [2024-11-15 11:53:05.042158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.607 [2024-11-15 11:53:05.047613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.607 [2024-11-15 11:53:05.047726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.607 [2024-11-15 11:53:05.047744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.607 [2024-11-15 11:53:05.050375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.607 [2024-11-15 11:53:05.050419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.607 [2024-11-15 11:53:05.050434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.607 [2024-11-15 11:53:05.053149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.607 [2024-11-15 11:53:05.053197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.607 [2024-11-15 11:53:05.053213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.607 [2024-11-15 11:53:05.055663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.607 [2024-11-15 11:53:05.055737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.607 [2024-11-15 11:53:05.055752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.607 [2024-11-15 11:53:05.058193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.607 [2024-11-15 11:53:05.058239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.607 [2024-11-15 11:53:05.058254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.607 [2024-11-15 11:53:05.060724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.607 [2024-11-15 11:53:05.060776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.607 [2024-11-15 11:53:05.060791] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.607 [2024-11-15 11:53:05.063933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.607 [2024-11-15 11:53:05.064015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.607 [2024-11-15 11:53:05.064034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.607 [2024-11-15 11:53:05.071516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.607 [2024-11-15 11:53:05.071803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.607 [2024-11-15 11:53:05.071819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.607 [2024-11-15 11:53:05.080934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.607 [2024-11-15 11:53:05.081245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.607 [2024-11-15 11:53:05.081261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.607 [2024-11-15 11:53:05.091011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.607 [2024-11-15 11:53:05.091365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.607 [2024-11-15 11:53:05.091381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.870 [2024-11-15 11:53:05.101379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.870 [2024-11-15 11:53:05.101713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.870 [2024-11-15 11:53:05.101730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.870 [2024-11-15 11:53:05.110320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.870 [2024-11-15 11:53:05.110579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.870 [2024-11-15 11:53:05.110595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.870 [2024-11-15 11:53:05.119814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.870 [2024-11-15 11:53:05.120033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.870 [2024-11-15 
11:53:05.120050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.870 [2024-11-15 11:53:05.129844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.870 [2024-11-15 11:53:05.130090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.870 [2024-11-15 11:53:05.130105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.870 [2024-11-15 11:53:05.140626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.870 [2024-11-15 11:53:05.140900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.870 [2024-11-15 11:53:05.140915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.870 [2024-11-15 11:53:05.148959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.871 [2024-11-15 11:53:05.149004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.871 [2024-11-15 11:53:05.149020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:39.871 [2024-11-15 11:53:05.153267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.871 [2024-11-15 11:53:05.153314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.871 [2024-11-15 11:53:05.153330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:39.871 [2024-11-15 11:53:05.157088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.871 [2024-11-15 11:53:05.157131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.871 [2024-11-15 11:53:05.157147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.871 [2024-11-15 11:53:05.160633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.871 [2024-11-15 11:53:05.160691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.871 [2024-11-15 11:53:05.160708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:39.871 [2024-11-15 11:53:05.164820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8 00:29:39.871 [2024-11-15 11:53:05.164867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0
00:29:39.871 [2024-11-15 11:53:05.164883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:39.871 [2024-11-15 11:53:05.168476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8
00:29:39.871 [2024-11-15 11:53:05.168537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.871 [2024-11-15 11:53:05.168553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... 11:53:05.171606 through 11:53:05.854886: the same three-line pattern repeats for over a hundred more queued WRITEs (sqid:1, cid:0 then cid:1, nsid:1, len:32, LBAs from 0 to 25440); every data digest error completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22) with sqhd cycling 0002/0022/0042/0062 ...]
00:29:40.398 5888.00 IOPS, 736.00 MiB/s [2024-11-15T10:53:05.896Z]
00:29:40.398 [2024-11-15 11:53:05.860335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1ef0860) with pdu=0x200016eff3c8
00:29:40.398 [2024-11-15 11:53:05.860580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.398 [2024-11-15 11:53:05.860595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
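The repeated tcp.c:2233:data_crc32_calc_done errors above mean the CRC32C computed by the receiver over each incoming data PDU payload disagreed with the PDU's DDGST trailer, so each affected WRITE completes as a transient transport error; those are exactly the errors this test run expects and asserts on below. A minimal sketch of that comparison, in illustrative Python rather than SPDK's C (crc32c/ddgst_ok are hypothetical names; the real check lives in SPDK's TCP transport code):

    def crc32c(data: bytes) -> int:
        # CRC-32C (Castagnoli), the digest NVMe/TCP uses for HDGST/DDGST.
        # Bitwise reference implementation, reflected polynomial 0x82F63B78.
        crc = 0xFFFFFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
        return crc ^ 0xFFFFFFFF

    def ddgst_ok(pdu_payload: bytes, ddgst_received: int) -> bool:
        # A mismatch fails the request with a transient transport error,
        # which is what bdev_get_iostat counts further down in this log.
        return crc32c(pdu_payload) == ddgst_received

    # Standard CRC-32C check value for "123456789".
    assert crc32c(b"123456789") == 0xE3069283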
0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:40.398 nvme0n1 : 2.00 5884.04 735.50 0.00 0.00 2714.97 1201.49 13216.43 00:29:40.398 [2024-11-15T10:53:05.897Z] =================================================================================================================== 00:29:40.399 [2024-11-15T10:53:05.897Z] Total : 5884.04 735.50 0.00 0.00 2714.97 1201.49 13216.43 00:29:40.399 { 00:29:40.399 "results": [ 00:29:40.399 { 00:29:40.399 "job": "nvme0n1", 00:29:40.399 "core_mask": "0x2", 00:29:40.399 "workload": "randwrite", 00:29:40.399 "status": "finished", 00:29:40.399 "queue_depth": 16, 00:29:40.399 "io_size": 131072, 00:29:40.399 "runtime": 2.004746, 00:29:40.399 "iops": 5884.0371797724, 00:29:40.399 "mibps": 735.50464747155, 00:29:40.399 "io_failed": 0, 00:29:40.399 "io_timeout": 0, 00:29:40.399 "avg_latency_us": 2714.9692686786484, 00:29:40.399 "min_latency_us": 1201.4933333333333, 00:29:40.399 "max_latency_us": 13216.426666666666 00:29:40.399 } 00:29:40.399 ], 00:29:40.399 "core_count": 1 00:29:40.399 } 00:29:40.399 11:53:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:40.399 11:53:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:40.399 11:53:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:40.399 | .driver_specific 00:29:40.399 | .nvme_error 00:29:40.399 | .status_code 00:29:40.399 | .command_transient_transport_error' 00:29:40.399 11:53:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:40.658 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 381 > 0 )) 00:29:40.658 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1251444 00:29:40.658 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 1251444 ']' 00:29:40.658 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 1251444 00:29:40.658 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:29:40.658 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:40.658 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1251444 00:29:40.658 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:40.658 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:40.658 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1251444' 00:29:40.658 killing process with pid 1251444 00:29:40.658 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 1251444 00:29:40.658 Received shutdown signal, test time was about 2.000000 seconds 00:29:40.658 00:29:40.658 Latency(us) 00:29:40.658 [2024-11-15T10:53:06.156Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:40.658 [2024-11-15T10:53:06.156Z] 
=================================================================================================================== 00:29:40.658 [2024-11-15T10:53:06.156Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:40.659 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 1251444 00:29:40.919 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1248954 00:29:40.919 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 1248954 ']' 00:29:40.919 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 1248954 00:29:40.919 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:29:40.919 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:40.919 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1248954 00:29:40.919 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:40.919 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:40.919 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1248954' 00:29:40.919 killing process with pid 1248954 00:29:40.919 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 1248954 00:29:40.919 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 1248954 00:29:40.919 00:29:40.919 real 0m16.607s 00:29:40.919 user 0m32.848s 00:29:40.919 sys 0m3.624s 00:29:40.919 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:40.919 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:40.919 ************************************ 00:29:40.919 END TEST nvmf_digest_error 00:29:40.919 ************************************ 00:29:41.180 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:41.180 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:29:41.180 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:41.180 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:29:41.180 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:41.180 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:29:41.180 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:41.180 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:41.180 rmmod nvme_tcp 00:29:41.180 rmmod nvme_fabrics 00:29:41.180 rmmod nvme_keyring 00:29:41.180 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:41.180 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:29:41.180 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:29:41.180 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 1248954 ']' 00:29:41.180 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@518 -- # killprocess 1248954 00:29:41.180 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 1248954 ']' 00:29:41.180 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 1248954 00:29:41.180 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1248954) - No such process 00:29:41.180 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 1248954 is not found' 00:29:41.180 Process with pid 1248954 is not found 00:29:41.180 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:41.180 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:41.180 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:41.180 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:29:41.180 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:29:41.180 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:41.180 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:29:41.180 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:41.180 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:41.180 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:41.180 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:41.180 11:53:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:43.726 00:29:43.726 real 0m43.626s 00:29:43.726 user 1m8.663s 00:29:43.726 sys 0m13.143s 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:43.726 ************************************ 00:29:43.726 END TEST nvmf_digest 00:29:43.726 ************************************ 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:43.726 ************************************ 00:29:43.726 START TEST nvmf_bdevperf 00:29:43.726 ************************************ 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:43.726 * Looking for test storage... 
00:29:43.726 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:29:43.726 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:43.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:43.727 --rc genhtml_branch_coverage=1 00:29:43.727 --rc genhtml_function_coverage=1 00:29:43.727 --rc genhtml_legend=1 00:29:43.727 --rc geninfo_all_blocks=1 00:29:43.727 --rc geninfo_unexecuted_blocks=1 00:29:43.727 00:29:43.727 ' 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:43.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:43.727 --rc genhtml_branch_coverage=1 00:29:43.727 --rc genhtml_function_coverage=1 00:29:43.727 --rc genhtml_legend=1 00:29:43.727 --rc geninfo_all_blocks=1 00:29:43.727 --rc geninfo_unexecuted_blocks=1 00:29:43.727 00:29:43.727 ' 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:43.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:43.727 --rc genhtml_branch_coverage=1 00:29:43.727 --rc genhtml_function_coverage=1 00:29:43.727 --rc genhtml_legend=1 00:29:43.727 --rc geninfo_all_blocks=1 00:29:43.727 --rc geninfo_unexecuted_blocks=1 00:29:43.727 00:29:43.727 ' 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:43.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:43.727 --rc genhtml_branch_coverage=1 00:29:43.727 --rc genhtml_function_coverage=1 00:29:43.727 --rc genhtml_legend=1 00:29:43.727 --rc geninfo_all_blocks=1 00:29:43.727 --rc geninfo_unexecuted_blocks=1 00:29:43.727 00:29:43.727 ' 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:43.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:43.727 11:53:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:52.051 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:52.051 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:52.051 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:52.051 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:52.051 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:52.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:52.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.504 ms 00:29:52.051 00:29:52.051 --- 10.0.0.2 ping statistics --- 00:29:52.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.052 rtt min/avg/max/mdev = 0.504/0.504/0.504/0.000 ms 00:29:52.052 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:52.052 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:52.052 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:29:52.052 00:29:52.052 --- 10.0.0.1 ping statistics --- 00:29:52.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.052 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:29:52.052 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:52.052 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:29:52.052 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:52.052 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:52.052 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:52.052 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:52.052 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:52.052 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:52.052 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:52.052 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:52.052 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:52.052 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:52.052 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:52.052 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:52.052 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1256395 00:29:52.052 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1256395 00:29:52.052 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:52.052 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 1256395 ']' 00:29:52.052 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:52.052 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:52.052 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:52.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:52.052 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:52.052 11:53:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:52.052 [2024-11-15 11:53:16.593450] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
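Condensing the nvmf_tcp_init trace above into one place: the target-side E810 port (cvl_0_0) is moved into its own network namespace, the pair is addressed back-to-back, TCP/4420 is opened, and reachability is checked in both directions. A minimal standalone sketch of the same bring-up, using the interface and namespace names from this run (adjust for other NICs):

  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1      # start from clean addresses
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                           # target port leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                        # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1                    # target -> initiator

The two pings correspond to the 0.504 ms and 0.313 ms checks above; if either fails, the target (nvmf_tgt inside the namespace) and bdevperf (outside it) cannot reach each other later in the test.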
00:29:52.052 [2024-11-15 11:53:16.593520] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:52.052 [2024-11-15 11:53:16.694402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:52.052 [2024-11-15 11:53:16.746785] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:52.052 [2024-11-15 11:53:16.746837] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:52.052 [2024-11-15 11:53:16.746845] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:52.052 [2024-11-15 11:53:16.746853] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:52.052 [2024-11-15 11:53:16.746859] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:52.052 [2024-11-15 11:53:16.748735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:52.052 [2024-11-15 11:53:16.748981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:52.052 [2024-11-15 11:53:16.748982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:52.052 11:53:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:52.052 11:53:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:29:52.052 11:53:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:52.052 11:53:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:52.052 11:53:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:52.052 11:53:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:52.052 11:53:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:52.052 11:53:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.052 11:53:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:52.052 [2024-11-15 11:53:17.469775] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:52.052 11:53:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.052 11:53:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:52.052 11:53:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.052 11:53:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:52.052 Malloc0 00:29:52.052 11:53:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.052 11:53:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:52.052 11:53:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.052 11:53:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:52.052 11:53:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:29:52.052 11:53:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:52.052 11:53:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.052 11:53:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:52.052 11:53:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.052 11:53:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:52.052 11:53:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.052 11:53:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:52.313 [2024-11-15 11:53:17.549715] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:52.313 11:53:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.313 11:53:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:52.313 11:53:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:52.313 11:53:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:52.313 11:53:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:52.313 11:53:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:52.313 11:53:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:52.313 { 00:29:52.313 "params": { 00:29:52.313 "name": "Nvme$subsystem", 00:29:52.313 "trtype": "$TEST_TRANSPORT", 00:29:52.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:52.313 "adrfam": "ipv4", 00:29:52.313 "trsvcid": "$NVMF_PORT", 00:29:52.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:52.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:52.313 "hdgst": ${hdgst:-false}, 00:29:52.313 "ddgst": ${ddgst:-false} 00:29:52.313 }, 00:29:52.313 "method": "bdev_nvme_attach_controller" 00:29:52.313 } 00:29:52.313 EOF 00:29:52.313 )") 00:29:52.313 11:53:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:52.313 11:53:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:29:52.314 11:53:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:52.314 11:53:17 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:52.314 "params": { 00:29:52.314 "name": "Nvme1", 00:29:52.314 "trtype": "tcp", 00:29:52.314 "traddr": "10.0.0.2", 00:29:52.314 "adrfam": "ipv4", 00:29:52.314 "trsvcid": "4420", 00:29:52.314 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:52.314 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:52.314 "hdgst": false, 00:29:52.314 "ddgst": false 00:29:52.314 }, 00:29:52.314 "method": "bdev_nvme_attach_controller" 00:29:52.314 }' 00:29:52.314 [2024-11-15 11:53:17.609775] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
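The five rpc_cmd calls above are the entire target-side bring-up: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem that allows any host, its namespace, and a listener on 10.0.0.2:4420. A hedged standalone equivalent, assuming the target's default /var/tmp/spdk.sock and the repo-relative rpc.py path used elsewhere in this run:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192              # transport options exactly as traced above
  $rpc bdev_malloc_create 64 512 -b Malloc0                 # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

On the initiator side, the JSON fragment printed just above (de-interleaved from the timestamps) is a single bdev_nvme_attach_controller call: it creates bdev Nvme1 over tcp/10.0.0.2:4420 with hostnqn nqn.2016-06.io.spdk:host1 and both digests (hdgst, ddgst) off. gen_nvmf_target_json splices it into a complete bdevperf config before it reaches /dev/fd/62; the wrapper itself is not echoed in the trace.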
00:29:52.314 [2024-11-15 11:53:17.609845] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1256502 ] 00:29:52.314 [2024-11-15 11:53:17.704438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:52.314 [2024-11-15 11:53:17.757387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.886 Running I/O for 1 seconds... 00:29:53.828 9049.00 IOPS, 35.35 MiB/s 00:29:53.828 Latency(us) 00:29:53.828 [2024-11-15T10:53:19.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:53.828 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:53.828 Verification LBA range: start 0x0 length 0x4000 00:29:53.828 Nvme1n1 : 1.01 9116.58 35.61 0.00 0.00 13980.75 2990.08 12342.61 00:29:53.828 [2024-11-15T10:53:19.326Z] =================================================================================================================== 00:29:53.828 [2024-11-15T10:53:19.326Z] Total : 9116.58 35.61 0.00 0.00 13980.75 2990.08 12342.61 00:29:53.828 11:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1256837 00:29:53.828 11:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:53.828 11:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:53.828 11:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:53.828 11:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:53.828 11:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:53.828 11:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:53.828 11:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:53.828 { 00:29:53.828 "params": { 00:29:53.828 "name": "Nvme$subsystem", 00:29:53.828 "trtype": "$TEST_TRANSPORT", 00:29:53.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:53.828 "adrfam": "ipv4", 00:29:53.828 "trsvcid": "$NVMF_PORT", 00:29:53.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:53.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:53.828 "hdgst": ${hdgst:-false}, 00:29:53.828 "ddgst": ${ddgst:-false} 00:29:53.828 }, 00:29:53.829 "method": "bdev_nvme_attach_controller" 00:29:53.829 } 00:29:53.829 EOF 00:29:53.829 )") 00:29:53.829 11:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:53.829 11:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:29:53.829 11:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:53.829 11:53:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:53.829 "params": { 00:29:53.829 "name": "Nvme1", 00:29:53.829 "trtype": "tcp", 00:29:53.829 "traddr": "10.0.0.2", 00:29:53.829 "adrfam": "ipv4", 00:29:53.829 "trsvcid": "4420", 00:29:53.829 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:53.829 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:53.829 "hdgst": false, 00:29:53.829 "ddgst": false 00:29:53.829 }, 00:29:53.829 "method": "bdev_nvme_attach_controller" 00:29:53.829 }' 00:29:53.829 [2024-11-15 11:53:19.281519] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:29:53.829 [2024-11-15 11:53:19.281583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1256837 ] 00:29:54.090 [2024-11-15 11:53:19.371346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.090 [2024-11-15 11:53:19.405709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.351 Running I/O for 15 seconds... 00:29:56.236 10241.00 IOPS, 40.00 MiB/s [2024-11-15T10:53:22.308Z] 10210.50 IOPS, 39.88 MiB/s [2024-11-15T10:53:22.308Z] 11:53:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1256395 00:29:56.810 11:53:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:56.810 [2024-11-15 11:53:22.243057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:84472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.810 [2024-11-15 11:53:22.243099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.810 [2024-11-15 11:53:22.243119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:84480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.810 [2024-11-15 11:53:22.243130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.810 [2024-11-15 11:53:22.243141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.810 [2024-11-15 11:53:22.243149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.810 [2024-11-15 11:53:22.243159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:84496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.810 [2024-11-15 11:53:22.243166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.810 [2024-11-15 11:53:22.243176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:84504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.811 [2024-11-15 11:53:22.243184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.811 [2024-11-15 11:53:22.243195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:84512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.811 [2024-11-15 
11:53:22.243204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.811 [2024-11-15 11:53:22.243213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:84520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.811 [2024-11-15 11:53:22.243221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.811 [2024-11-15 11:53:22.243230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.811 [2024-11-15 11:53:22.243243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.811 [2024-11-15 11:53:22.243253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.811 [2024-11-15 11:53:22.243261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.811 [2024-11-15 11:53:22.243272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.811 [2024-11-15 11:53:22.243279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.811 [2024-11-15 11:53:22.243290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.811 [2024-11-15 11:53:22.243298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.811 [2024-11-15 11:53:22.243308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:84560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.811 [2024-11-15 11:53:22.243317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.811 [2024-11-15 11:53:22.243327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:84568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.811 [2024-11-15 11:53:22.243336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.811 [2024-11-15 11:53:22.243348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:84576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.811 [2024-11-15 11:53:22.243357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.811 [2024-11-15 11:53:22.243367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.811 [2024-11-15 11:53:22.243377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.811 [2024-11-15 11:53:22.243388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:84592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.811 [2024-11-15 11:53:22.243397] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.811 [2024-11-15 11:53:22.243409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:84600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.811 [2024-11-15 11:53:22.243416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.811 [2024-11-15 11:53:22.243428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:84608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.811 [2024-11-15 11:53:22.243436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.811 [2024-11-15 11:53:22.243447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:84216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.811 [2024-11-15 11:53:22.243454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.811 [2024-11-15 11:53:22.243464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:84616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.811 [2024-11-15 11:53:22.243471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.811 [2024-11-15 11:53:22.243481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:84624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.811 [2024-11-15 11:53:22.243490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.811 [2024-11-15 11:53:22.243500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:84632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.811 [2024-11-15 11:53:22.243507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.811 [2024-11-15 11:53:22.243517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:84640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.811 [2024-11-15 11:53:22.243525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.811 [2024-11-15 11:53:22.243534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.811 [2024-11-15 11:53:22.243542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.811 [2024-11-15 11:53:22.243551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:84656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.811 [2024-11-15 11:53:22.243558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.811 [2024-11-15 11:53:22.243658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.811 [2024-11-15 11:53:22.243666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.811 [2024-11-15 11:53:22.243676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:84672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.811 [2024-11-15 11:53:22.243683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.811 [2024-11-15 11:53:22.243692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:84680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.811 [2024-11-15 11:53:22.243699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.811 [2024-11-15 11:53:22.243709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:84688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.811 [2024-11-15 11:53:22.243716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.811 [2024-11-15 11:53:22.243725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:84696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.811 [2024-11-15 11:53:22.243732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.811 [2024-11-15 11:53:22.243741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.811 [2024-11-15 11:53:22.243748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.811 [2024-11-15 11:53:22.243758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:84712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.811 [2024-11-15 11:53:22.243765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.811 [2024-11-15 11:53:22.243774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.811 [2024-11-15 11:53:22.243782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.811 [2024-11-15 11:53:22.243794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:84728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.811 [2024-11-15 11:53:22.243801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.811 [2024-11-15 11:53:22.243811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:84736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.811 [2024-11-15 11:53:22.243818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.811 [2024-11-15 11:53:22.243827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:84744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.811 [2024-11-15 11:53:22.243834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:29:56.811 [2024-11-15 11:53:22.243844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:84752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.811 [2024-11-15 11:53:22.243851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.811 [2024-11-15 11:53:22.243860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:84760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.811 [2024-11-15 11:53:22.243867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.811 [2024-11-15 11:53:22.243877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:84768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.811 [2024-11-15 11:53:22.243884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.811 [2024-11-15 11:53:22.243894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:84776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.811 [2024-11-15 11:53:22.243901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.811 [2024-11-15 11:53:22.243910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:84784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.811 [2024-11-15 11:53:22.243917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.811 [2024-11-15 11:53:22.243927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:84792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.811 [2024-11-15 11:53:22.243934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.811 [2024-11-15 11:53:22.243944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:84800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.811 [2024-11-15 11:53:22.243951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.811 [2024-11-15 11:53:22.243960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:84808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.811 [2024-11-15 11:53:22.243967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.812 [2024-11-15 11:53:22.243976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.812 [2024-11-15 11:53:22.243984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.812 [2024-11-15 11:53:22.243993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.812 [2024-11-15 11:53:22.244002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.812 [2024-11-15 11:53:22.244011] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.812 [2024-11-15 11:53:22.244018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.812 [2024-11-15 11:53:22.244027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:84840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.812 [2024-11-15 11:53:22.244034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.812 [2024-11-15 11:53:22.244044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.812 [2024-11-15 11:53:22.244051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.812 [2024-11-15 11:53:22.244061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:84856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.812 [2024-11-15 11:53:22.244068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.812 [2024-11-15 11:53:22.244077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:84864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.812 [2024-11-15 11:53:22.244085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.812 [2024-11-15 11:53:22.244094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:84872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.812 [2024-11-15 11:53:22.244102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.812 [2024-11-15 11:53:22.244111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:84880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.812 [2024-11-15 11:53:22.244118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.812 [2024-11-15 11:53:22.244128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:84888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.812 [2024-11-15 11:53:22.244135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.812 [2024-11-15 11:53:22.244144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:84896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.812 [2024-11-15 11:53:22.244151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.812 [2024-11-15 11:53:22.244160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:84904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.812 [2024-11-15 11:53:22.244167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.812 [2024-11-15 11:53:22.244177] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:84912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.812 [2024-11-15 11:53:22.244184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.812 [2024-11-15 11:53:22.244193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:84920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.812 [2024-11-15 11:53:22.244200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.812 [2024-11-15 11:53:22.244209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:84928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.812 [2024-11-15 11:53:22.244221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.812 [2024-11-15 11:53:22.244231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:84936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.812 [2024-11-15 11:53:22.244238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.812 [2024-11-15 11:53:22.244247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.812 [2024-11-15 11:53:22.244254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.812 [2024-11-15 11:53:22.244264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:84952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.812 [2024-11-15 11:53:22.244271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.812 [2024-11-15 11:53:22.244280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:84960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.812 [2024-11-15 11:53:22.244288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.812 [2024-11-15 11:53:22.244297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:84968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.812 [2024-11-15 11:53:22.244304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.812 [2024-11-15 11:53:22.244313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:84976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.812 [2024-11-15 11:53:22.244320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.812 [2024-11-15 11:53:22.244330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:84984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.812 [2024-11-15 11:53:22.244338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.812 [2024-11-15 11:53:22.244347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:84992 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.812 [2024-11-15 11:53:22.244354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.812 [2024-11-15 11:53:22.244364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.812 [2024-11-15 11:53:22.244371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.812 [2024-11-15 11:53:22.244380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:85008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.812 [2024-11-15 11:53:22.244387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.812 [2024-11-15 11:53:22.244397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:85016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.812 [2024-11-15 11:53:22.244404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.812 [2024-11-15 11:53:22.244413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:85024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.812 [2024-11-15 11:53:22.244420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.812 [2024-11-15 11:53:22.244431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:85032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.812 [2024-11-15 11:53:22.244439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.812 [2024-11-15 11:53:22.244448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:85040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.812 [2024-11-15 11:53:22.244456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.812 [2024-11-15 11:53:22.244465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:85048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.812 [2024-11-15 11:53:22.244472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.812 [2024-11-15 11:53:22.244481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:85056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.812 [2024-11-15 11:53:22.244488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.812 [2024-11-15 11:53:22.244498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:85064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.812 [2024-11-15 11:53:22.244505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.812 [2024-11-15 11:53:22.244514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:85072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:56.812 [2024-11-15 11:53:22.244521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.812 [2024-11-15 11:53:22.244531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:85080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.812 [2024-11-15 11:53:22.244538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.812 [2024-11-15 11:53:22.244547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:85088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.812 [2024-11-15 11:53:22.244554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.812 [2024-11-15 11:53:22.244566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:85096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.812 [2024-11-15 11:53:22.244574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.812 [2024-11-15 11:53:22.244583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:85104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.813 [2024-11-15 11:53:22.244590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.813 [2024-11-15 11:53:22.244599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:85112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.813 [2024-11-15 11:53:22.244607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.813 [2024-11-15 11:53:22.244617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:85120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.813 [2024-11-15 11:53:22.244625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.813 [2024-11-15 11:53:22.244634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:85128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.813 [2024-11-15 11:53:22.244643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.813 [2024-11-15 11:53:22.244652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:85136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.813 [2024-11-15 11:53:22.244659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.813 [2024-11-15 11:53:22.244669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.813 [2024-11-15 11:53:22.244676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.813 [2024-11-15 11:53:22.244685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:85152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.813 [2024-11-15 11:53:22.244693] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.813 [2024-11-15 11:53:22.244702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:85160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.813 [2024-11-15 11:53:22.244709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.813 [2024-11-15 11:53:22.244719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:85168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.813 [2024-11-15 11:53:22.244726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.813 [2024-11-15 11:53:22.244736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:85176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.813 [2024-11-15 11:53:22.244743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.813 [2024-11-15 11:53:22.244752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:85184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.813 [2024-11-15 11:53:22.244759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.813 [2024-11-15 11:53:22.244768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:85192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.813 [2024-11-15 11:53:22.244775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.813 [2024-11-15 11:53:22.244785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:85200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.813 [2024-11-15 11:53:22.244792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.813 [2024-11-15 11:53:22.244801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:85208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.813 [2024-11-15 11:53:22.244808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.813 [2024-11-15 11:53:22.244817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.813 [2024-11-15 11:53:22.244825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.813 [2024-11-15 11:53:22.244834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:85224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.813 [2024-11-15 11:53:22.244841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.813 [2024-11-15 11:53:22.244853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.813 [2024-11-15 11:53:22.244860] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.813 [2024-11-15 11:53:22.244875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:84232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.813 [2024-11-15 11:53:22.244882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.813 [2024-11-15 11:53:22.244892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:84240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.813 [2024-11-15 11:53:22.244899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.813 [2024-11-15 11:53:22.244909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:84248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.813 [2024-11-15 11:53:22.244916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.813 [2024-11-15 11:53:22.244925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:84256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.813 [2024-11-15 11:53:22.244933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.813 [2024-11-15 11:53:22.244943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:84264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.813 [2024-11-15 11:53:22.244950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.813 [2024-11-15 11:53:22.244960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.813 [2024-11-15 11:53:22.244967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.813 [2024-11-15 11:53:22.244976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:84280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.813 [2024-11-15 11:53:22.244984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.813 [2024-11-15 11:53:22.244994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:84288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.813 [2024-11-15 11:53:22.245001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.813 [2024-11-15 11:53:22.245010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:84296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.813 [2024-11-15 11:53:22.245017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.813 [2024-11-15 11:53:22.245027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:84304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.813 [2024-11-15 11:53:22.245034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.813 [2024-11-15 11:53:22.245043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:84312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.813 [2024-11-15 11:53:22.245050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.813 [2024-11-15 11:53:22.245060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:84320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.813 [2024-11-15 11:53:22.245068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.813 [2024-11-15 11:53:22.245079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:84328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.813 [2024-11-15 11:53:22.245086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.813 [2024-11-15 11:53:22.245095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.813 [2024-11-15 11:53:22.245102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.813 [2024-11-15 11:53:22.245112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:84344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.813 [2024-11-15 11:53:22.245119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.813 [2024-11-15 11:53:22.245129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:84352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.813 [2024-11-15 11:53:22.245136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.813 [2024-11-15 11:53:22.245145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:84360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.813 [2024-11-15 11:53:22.245153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.813 [2024-11-15 11:53:22.245162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:84368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.813 [2024-11-15 11:53:22.245169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.813 [2024-11-15 11:53:22.245179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:84376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.813 [2024-11-15 11:53:22.245186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.813 [2024-11-15 11:53:22.245196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:84384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.813 [2024-11-15 11:53:22.245203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:56.813 [2024-11-15 11:53:22.245213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:84392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.813 [2024-11-15 11:53:22.245220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.813 [2024-11-15 11:53:22.245229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:84400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.813 [2024-11-15 11:53:22.245237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.813 [2024-11-15 11:53:22.245246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:85232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:56.813 [2024-11-15 11:53:22.245253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.813 [2024-11-15 11:53:22.245263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:84408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.814 [2024-11-15 11:53:22.245270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.814 [2024-11-15 11:53:22.245280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:84416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.814 [2024-11-15 11:53:22.245288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.814 [2024-11-15 11:53:22.245297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.814 [2024-11-15 11:53:22.245305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.814 [2024-11-15 11:53:22.245314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:84432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.814 [2024-11-15 11:53:22.245321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.814 [2024-11-15 11:53:22.245330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:84440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.814 [2024-11-15 11:53:22.245338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.814 [2024-11-15 11:53:22.245347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:84448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.814 [2024-11-15 11:53:22.245354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.814 [2024-11-15 11:53:22.245364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:56.814 [2024-11-15 11:53:22.245371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.814 [2024-11-15 11:53:22.245379] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bf1b0 is same with the state(6) to be set 00:29:56.814 [2024-11-15 11:53:22.245389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:56.814 [2024-11-15 11:53:22.245395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:56.814 [2024-11-15 11:53:22.245401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84464 len:8 PRP1 0x0 PRP2 0x0 00:29:56.814 [2024-11-15 11:53:22.245412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:56.814 [2024-11-15 11:53:22.248997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.814 [2024-11-15 11:53:22.249049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:56.814 [2024-11-15 11:53:22.249894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-11-15 11:53:22.249932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:56.814 [2024-11-15 11:53:22.249943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:56.814 [2024-11-15 11:53:22.250184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:56.814 [2024-11-15 11:53:22.250406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.814 [2024-11-15 11:53:22.250416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.814 [2024-11-15 11:53:22.250424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.814 [2024-11-15 11:53:22.250433] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:56.814 [2024-11-15 11:53:22.263133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.814 [2024-11-15 11:53:22.263792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-11-15 11:53:22.263830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:56.814 [2024-11-15 11:53:22.263841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:56.814 [2024-11-15 11:53:22.264081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:56.814 [2024-11-15 11:53:22.264303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.814 [2024-11-15 11:53:22.264312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.814 [2024-11-15 11:53:22.264320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:29:56.814 [2024-11-15 11:53:22.264329] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:56.814 [2024-11-15 11:53:22.277023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.814 [2024-11-15 11:53:22.277698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-11-15 11:53:22.277738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:56.814 [2024-11-15 11:53:22.277749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:56.814 [2024-11-15 11:53:22.277988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:56.814 [2024-11-15 11:53:22.278211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.814 [2024-11-15 11:53:22.278220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.814 [2024-11-15 11:53:22.278228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.814 [2024-11-15 11:53:22.278236] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:56.814 [2024-11-15 11:53:22.290946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:56.814 [2024-11-15 11:53:22.291607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.814 [2024-11-15 11:53:22.291648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:56.814 [2024-11-15 11:53:22.291661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:56.814 [2024-11-15 11:53:22.291902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:56.814 [2024-11-15 11:53:22.292124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:56.814 [2024-11-15 11:53:22.292133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:56.814 [2024-11-15 11:53:22.292141] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:56.814 [2024-11-15 11:53:22.292148] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.076 [2024-11-15 11:53:22.304860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.076 [2024-11-15 11:53:22.305539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.077 [2024-11-15 11:53:22.305589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:57.077 [2024-11-15 11:53:22.305601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:57.077 [2024-11-15 11:53:22.305848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:57.077 [2024-11-15 11:53:22.306070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.077 [2024-11-15 11:53:22.306080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.077 [2024-11-15 11:53:22.306088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.077 [2024-11-15 11:53:22.306096] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:57.077 [2024-11-15 11:53:22.318806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.077 [2024-11-15 11:53:22.319468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.077 [2024-11-15 11:53:22.319510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:57.077 [2024-11-15 11:53:22.319522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:57.077 [2024-11-15 11:53:22.319775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:57.077 [2024-11-15 11:53:22.319998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.077 [2024-11-15 11:53:22.320007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.077 [2024-11-15 11:53:22.320015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.077 [2024-11-15 11:53:22.320023] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.077 [2024-11-15 11:53:22.332711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.077 [2024-11-15 11:53:22.333341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.077 [2024-11-15 11:53:22.333385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:57.077 [2024-11-15 11:53:22.333397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:57.077 [2024-11-15 11:53:22.333650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:57.077 [2024-11-15 11:53:22.333875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.077 [2024-11-15 11:53:22.333883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.077 [2024-11-15 11:53:22.333892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.077 [2024-11-15 11:53:22.333900] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:57.077 [2024-11-15 11:53:22.346593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.077 [2024-11-15 11:53:22.347239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.077 [2024-11-15 11:53:22.347283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:57.077 [2024-11-15 11:53:22.347294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:57.077 [2024-11-15 11:53:22.347537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:57.077 [2024-11-15 11:53:22.347771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.077 [2024-11-15 11:53:22.347786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.077 [2024-11-15 11:53:22.347794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.077 [2024-11-15 11:53:22.347802] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.077 [2024-11-15 11:53:22.360498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.077 [2024-11-15 11:53:22.361101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.077 [2024-11-15 11:53:22.361124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:57.077 [2024-11-15 11:53:22.361132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:57.077 [2024-11-15 11:53:22.361352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:57.077 [2024-11-15 11:53:22.361578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.077 [2024-11-15 11:53:22.361587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.077 [2024-11-15 11:53:22.361594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.077 [2024-11-15 11:53:22.361601] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:57.077 [2024-11-15 11:53:22.374285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.077 [2024-11-15 11:53:22.374743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.077 [2024-11-15 11:53:22.374766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:57.077 [2024-11-15 11:53:22.374774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:57.077 [2024-11-15 11:53:22.374994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:57.077 [2024-11-15 11:53:22.375213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.077 [2024-11-15 11:53:22.375222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.077 [2024-11-15 11:53:22.375229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.077 [2024-11-15 11:53:22.375237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.077 [2024-11-15 11:53:22.388158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.077 [2024-11-15 11:53:22.388834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.077 [2024-11-15 11:53:22.388886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:57.077 [2024-11-15 11:53:22.388898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:57.077 [2024-11-15 11:53:22.389146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:57.077 [2024-11-15 11:53:22.389370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.077 [2024-11-15 11:53:22.389379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.077 [2024-11-15 11:53:22.389387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.077 [2024-11-15 11:53:22.389401] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:57.077 [2024-11-15 11:53:22.402126] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.077 [2024-11-15 11:53:22.402720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.077 [2024-11-15 11:53:22.402775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:57.077 [2024-11-15 11:53:22.402788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:57.077 [2024-11-15 11:53:22.403040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:57.077 [2024-11-15 11:53:22.403263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.077 [2024-11-15 11:53:22.403273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.077 [2024-11-15 11:53:22.403282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.077 [2024-11-15 11:53:22.403290] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.077 [2024-11-15 11:53:22.416028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.077 [2024-11-15 11:53:22.416747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.077 [2024-11-15 11:53:22.416802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:57.077 [2024-11-15 11:53:22.416814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:57.077 [2024-11-15 11:53:22.417067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:57.077 [2024-11-15 11:53:22.417291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.077 [2024-11-15 11:53:22.417300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.077 [2024-11-15 11:53:22.417308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.077 [2024-11-15 11:53:22.417317] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:57.077 [2024-11-15 11:53:22.429831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.077 [2024-11-15 11:53:22.430571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.077 [2024-11-15 11:53:22.430630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:57.077 [2024-11-15 11:53:22.430642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:57.077 [2024-11-15 11:53:22.430895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:57.077 [2024-11-15 11:53:22.431120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.077 [2024-11-15 11:53:22.431130] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.077 [2024-11-15 11:53:22.431138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.078 [2024-11-15 11:53:22.431146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.078 [2024-11-15 11:53:22.443661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.078 [2024-11-15 11:53:22.444372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.078 [2024-11-15 11:53:22.444431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:57.078 [2024-11-15 11:53:22.444443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:57.078 [2024-11-15 11:53:22.444710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:57.078 [2024-11-15 11:53:22.444936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.078 [2024-11-15 11:53:22.444945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.078 [2024-11-15 11:53:22.444953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.078 [2024-11-15 11:53:22.444962] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:57.078 [2024-11-15 11:53:22.457490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:57.078 [2024-11-15 11:53:22.458191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.078 [2024-11-15 11:53:22.458253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:57.078 [2024-11-15 11:53:22.458267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:57.078 [2024-11-15 11:53:22.458524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:57.078 [2024-11-15 11:53:22.458768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:57.078 [2024-11-15 11:53:22.458778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:57.078 [2024-11-15 11:53:22.458787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:57.078 [2024-11-15 11:53:22.458796] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:57.078 [2024-11-15 11:53:22.471337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.078 [2024-11-15 11:53:22.472054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.078 [2024-11-15 11:53:22.472117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.078 [2024-11-15 11:53:22.472130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.078 [2024-11-15 11:53:22.472386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.078 [2024-11-15 11:53:22.472625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.078 [2024-11-15 11:53:22.472636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.078 [2024-11-15 11:53:22.472646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.078 [2024-11-15 11:53:22.472655] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.078 [2024-11-15 11:53:22.485176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.078 [2024-11-15 11:53:22.485913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.078 [2024-11-15 11:53:22.485976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.078 [2024-11-15 11:53:22.485989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.078 [2024-11-15 11:53:22.486253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.078 [2024-11-15 11:53:22.486478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.078 [2024-11-15 11:53:22.486487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.078 [2024-11-15 11:53:22.486496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.078 [2024-11-15 11:53:22.486505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.078 [2024-11-15 11:53:22.499089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.078 [2024-11-15 11:53:22.499793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.078 [2024-11-15 11:53:22.499857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.078 [2024-11-15 11:53:22.499870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.078 [2024-11-15 11:53:22.500127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.078 [2024-11-15 11:53:22.500354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.078 [2024-11-15 11:53:22.500363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.078 [2024-11-15 11:53:22.500373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.078 [2024-11-15 11:53:22.500384] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.078 [2024-11-15 11:53:22.512955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.078 [2024-11-15 11:53:22.513480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.078 [2024-11-15 11:53:22.513508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.078 [2024-11-15 11:53:22.513517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.078 [2024-11-15 11:53:22.513750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.078 [2024-11-15 11:53:22.513972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.078 [2024-11-15 11:53:22.513981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.078 [2024-11-15 11:53:22.513989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.078 [2024-11-15 11:53:22.513996] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.078 [2024-11-15 11:53:22.526775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.078 [2024-11-15 11:53:22.527439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.078 [2024-11-15 11:53:22.527502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.078 [2024-11-15 11:53:22.527515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.078 [2024-11-15 11:53:22.527787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.078 [2024-11-15 11:53:22.528014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.078 [2024-11-15 11:53:22.528033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.078 [2024-11-15 11:53:22.528042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.078 [2024-11-15 11:53:22.528051] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.078 [2024-11-15 11:53:22.540607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.078 [2024-11-15 11:53:22.541195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.078 [2024-11-15 11:53:22.541224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.078 [2024-11-15 11:53:22.541232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.078 [2024-11-15 11:53:22.541455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.078 [2024-11-15 11:53:22.541686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.078 [2024-11-15 11:53:22.541704] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.078 [2024-11-15 11:53:22.541712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.078 [2024-11-15 11:53:22.541720] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.078 [2024-11-15 11:53:22.554550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.078 [2024-11-15 11:53:22.555219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.078 [2024-11-15 11:53:22.555281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.078 [2024-11-15 11:53:22.555294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.078 [2024-11-15 11:53:22.555550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.078 [2024-11-15 11:53:22.555792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.078 [2024-11-15 11:53:22.555803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.078 [2024-11-15 11:53:22.555811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.078 [2024-11-15 11:53:22.555820] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.078 [2024-11-15 11:53:22.568553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.078 [2024-11-15 11:53:22.569254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.078 [2024-11-15 11:53:22.569318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.078 [2024-11-15 11:53:22.569331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.078 [2024-11-15 11:53:22.569604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.078 [2024-11-15 11:53:22.569831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.079 [2024-11-15 11:53:22.569842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.079 [2024-11-15 11:53:22.569851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.079 [2024-11-15 11:53:22.569872] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.341 [2024-11-15 11:53:22.582412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.341 [2024-11-15 11:53:22.583053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.341 [2024-11-15 11:53:22.583117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.341 [2024-11-15 11:53:22.583130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.341 [2024-11-15 11:53:22.583386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.341 [2024-11-15 11:53:22.583628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.341 [2024-11-15 11:53:22.583639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.341 [2024-11-15 11:53:22.583647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.341 [2024-11-15 11:53:22.583656] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.341 [2024-11-15 11:53:22.596396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.341 [2024-11-15 11:53:22.596993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.341 [2024-11-15 11:53:22.597023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.341 [2024-11-15 11:53:22.597032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.341 [2024-11-15 11:53:22.597255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.341 [2024-11-15 11:53:22.597474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.341 [2024-11-15 11:53:22.597483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.341 [2024-11-15 11:53:22.597492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.341 [2024-11-15 11:53:22.597500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.341 [2024-11-15 11:53:22.610221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.341 [2024-11-15 11:53:22.610878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.341 [2024-11-15 11:53:22.610941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.341 [2024-11-15 11:53:22.610954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.341 [2024-11-15 11:53:22.611211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.341 [2024-11-15 11:53:22.611438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.341 [2024-11-15 11:53:22.611447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.341 [2024-11-15 11:53:22.611455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.341 [2024-11-15 11:53:22.611464] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.341 8970.33 IOPS, 35.04 MiB/s [2024-11-15T10:53:22.839Z] [2024-11-15 11:53:22.624021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.341 [2024-11-15 11:53:22.624728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.342 [2024-11-15 11:53:22.624791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.342 [2024-11-15 11:53:22.624804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.342 [2024-11-15 11:53:22.625060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.342 [2024-11-15 11:53:22.625286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.342 [2024-11-15 11:53:22.625295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.342 [2024-11-15 11:53:22.625304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.342 [2024-11-15 11:53:22.625313] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.342 [2024-11-15 11:53:22.637836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.342 [2024-11-15 11:53:22.638514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.342 [2024-11-15 11:53:22.638588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.342 [2024-11-15 11:53:22.638601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.342 [2024-11-15 11:53:22.638858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.342 [2024-11-15 11:53:22.639083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.342 [2024-11-15 11:53:22.639093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.342 [2024-11-15 11:53:22.639101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.342 [2024-11-15 11:53:22.639110] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.342 [2024-11-15 11:53:22.651830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.342 [2024-11-15 11:53:22.652428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.342 [2024-11-15 11:53:22.652457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.342 [2024-11-15 11:53:22.652466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.342 [2024-11-15 11:53:22.652697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.342 [2024-11-15 11:53:22.652919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.342 [2024-11-15 11:53:22.652929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.342 [2024-11-15 11:53:22.652937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.342 [2024-11-15 11:53:22.652945] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.342 [2024-11-15 11:53:22.665652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.342 [2024-11-15 11:53:22.666337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.342 [2024-11-15 11:53:22.666399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.342 [2024-11-15 11:53:22.666419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.342 [2024-11-15 11:53:22.666691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.342 [2024-11-15 11:53:22.666918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.342 [2024-11-15 11:53:22.666928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.342 [2024-11-15 11:53:22.666936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.342 [2024-11-15 11:53:22.666945] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.342 [2024-11-15 11:53:22.679457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.342 [2024-11-15 11:53:22.680149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.342 [2024-11-15 11:53:22.680211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.342 [2024-11-15 11:53:22.680223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.342 [2024-11-15 11:53:22.680480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.342 [2024-11-15 11:53:22.680722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.342 [2024-11-15 11:53:22.680733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.342 [2024-11-15 11:53:22.680742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.342 [2024-11-15 11:53:22.680751] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.342 [2024-11-15 11:53:22.693273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.342 [2024-11-15 11:53:22.693963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.342 [2024-11-15 11:53:22.694027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.342 [2024-11-15 11:53:22.694040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.342 [2024-11-15 11:53:22.694296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.342 [2024-11-15 11:53:22.694522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.342 [2024-11-15 11:53:22.694531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.342 [2024-11-15 11:53:22.694540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.342 [2024-11-15 11:53:22.694548] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.342 [2024-11-15 11:53:22.707073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.342 [2024-11-15 11:53:22.707645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.342 [2024-11-15 11:53:22.707675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.342 [2024-11-15 11:53:22.707684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.342 [2024-11-15 11:53:22.707908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.342 [2024-11-15 11:53:22.708128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.342 [2024-11-15 11:53:22.708145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.342 [2024-11-15 11:53:22.708153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.342 [2024-11-15 11:53:22.708161] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.342 [2024-11-15 11:53:22.720881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.342 [2024-11-15 11:53:22.721519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.342 [2024-11-15 11:53:22.721593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.342 [2024-11-15 11:53:22.721607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.342 [2024-11-15 11:53:22.721863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.342 [2024-11-15 11:53:22.722089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.342 [2024-11-15 11:53:22.722099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.342 [2024-11-15 11:53:22.722107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.342 [2024-11-15 11:53:22.722116] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.342 [2024-11-15 11:53:22.734841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.342 [2024-11-15 11:53:22.735432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.342 [2024-11-15 11:53:22.735494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.342 [2024-11-15 11:53:22.735506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.342 [2024-11-15 11:53:22.735779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.342 [2024-11-15 11:53:22.736005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.342 [2024-11-15 11:53:22.736014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.342 [2024-11-15 11:53:22.736024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.342 [2024-11-15 11:53:22.736033] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.342 [2024-11-15 11:53:22.748744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.342 [2024-11-15 11:53:22.749442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.342 [2024-11-15 11:53:22.749505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.342 [2024-11-15 11:53:22.749518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.342 [2024-11-15 11:53:22.749787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.342 [2024-11-15 11:53:22.750015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.342 [2024-11-15 11:53:22.750027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.342 [2024-11-15 11:53:22.750037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.342 [2024-11-15 11:53:22.750055] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.342 [2024-11-15 11:53:22.762615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.342 [2024-11-15 11:53:22.763305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.343 [2024-11-15 11:53:22.763368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.343 [2024-11-15 11:53:22.763381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.343 [2024-11-15 11:53:22.763651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.343 [2024-11-15 11:53:22.763881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.343 [2024-11-15 11:53:22.763891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.343 [2024-11-15 11:53:22.763900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.343 [2024-11-15 11:53:22.763910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.343 [2024-11-15 11:53:22.776437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.343 [2024-11-15 11:53:22.777120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.343 [2024-11-15 11:53:22.777183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.343 [2024-11-15 11:53:22.777196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.343 [2024-11-15 11:53:22.777452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.343 [2024-11-15 11:53:22.777693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.343 [2024-11-15 11:53:22.777703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.343 [2024-11-15 11:53:22.777712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.343 [2024-11-15 11:53:22.777720] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.343 [2024-11-15 11:53:22.790456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.343 [2024-11-15 11:53:22.791230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.343 [2024-11-15 11:53:22.791294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.343 [2024-11-15 11:53:22.791307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.343 [2024-11-15 11:53:22.791578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.343 [2024-11-15 11:53:22.791805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.343 [2024-11-15 11:53:22.791815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.343 [2024-11-15 11:53:22.791824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.343 [2024-11-15 11:53:22.791833] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.343 [2024-11-15 11:53:22.804332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.343 [2024-11-15 11:53:22.805058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.343 [2024-11-15 11:53:22.805120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.343 [2024-11-15 11:53:22.805133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.343 [2024-11-15 11:53:22.805389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.343 [2024-11-15 11:53:22.805633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.343 [2024-11-15 11:53:22.805643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.343 [2024-11-15 11:53:22.805652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.343 [2024-11-15 11:53:22.805661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.343 [2024-11-15 11:53:22.818183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.343 [2024-11-15 11:53:22.818862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.343 [2024-11-15 11:53:22.818924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.343 [2024-11-15 11:53:22.818937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.343 [2024-11-15 11:53:22.819194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.343 [2024-11-15 11:53:22.819420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.343 [2024-11-15 11:53:22.819429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.343 [2024-11-15 11:53:22.819437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.343 [2024-11-15 11:53:22.819447] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.343 [2024-11-15 11:53:22.832184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.343 [2024-11-15 11:53:22.832778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.343 [2024-11-15 11:53:22.832809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.343 [2024-11-15 11:53:22.832818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.343 [2024-11-15 11:53:22.833040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.343 [2024-11-15 11:53:22.833260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.343 [2024-11-15 11:53:22.833270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.343 [2024-11-15 11:53:22.833278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.343 [2024-11-15 11:53:22.833285] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.605 [2024-11-15 11:53:22.846024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.605 [2024-11-15 11:53:22.846590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.605 [2024-11-15 11:53:22.846616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.605 [2024-11-15 11:53:22.846633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.605 [2024-11-15 11:53:22.846854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.606 [2024-11-15 11:53:22.847074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.606 [2024-11-15 11:53:22.847084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.606 [2024-11-15 11:53:22.847091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.606 [2024-11-15 11:53:22.847099] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.606 [2024-11-15 11:53:22.859823] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.606 [2024-11-15 11:53:22.860523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.606 [2024-11-15 11:53:22.860598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.606 [2024-11-15 11:53:22.860611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.606 [2024-11-15 11:53:22.860867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.606 [2024-11-15 11:53:22.861093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.606 [2024-11-15 11:53:22.861103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.606 [2024-11-15 11:53:22.861111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.606 [2024-11-15 11:53:22.861120] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.606 [2024-11-15 11:53:22.873657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.606 [2024-11-15 11:53:22.874279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.606 [2024-11-15 11:53:22.874341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.606 [2024-11-15 11:53:22.874354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.606 [2024-11-15 11:53:22.874625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.606 [2024-11-15 11:53:22.874853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.606 [2024-11-15 11:53:22.874862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.606 [2024-11-15 11:53:22.874871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.606 [2024-11-15 11:53:22.874880] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.606 [2024-11-15 11:53:22.887617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.606 [2024-11-15 11:53:22.888252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.606 [2024-11-15 11:53:22.888314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.606 [2024-11-15 11:53:22.888327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.606 [2024-11-15 11:53:22.888599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.606 [2024-11-15 11:53:22.888827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.606 [2024-11-15 11:53:22.888843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.606 [2024-11-15 11:53:22.888851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.606 [2024-11-15 11:53:22.888860] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.606 [2024-11-15 11:53:22.901583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.606 [2024-11-15 11:53:22.902284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.606 [2024-11-15 11:53:22.902347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.606 [2024-11-15 11:53:22.902359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.606 [2024-11-15 11:53:22.902631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.606 [2024-11-15 11:53:22.902859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.606 [2024-11-15 11:53:22.902868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.606 [2024-11-15 11:53:22.902877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.606 [2024-11-15 11:53:22.902886] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.606 [2024-11-15 11:53:22.915408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.606 [2024-11-15 11:53:22.916085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.606 [2024-11-15 11:53:22.916148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.606 [2024-11-15 11:53:22.916161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.606 [2024-11-15 11:53:22.916417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.606 [2024-11-15 11:53:22.916673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.606 [2024-11-15 11:53:22.916685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.606 [2024-11-15 11:53:22.916693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.606 [2024-11-15 11:53:22.916702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.606 [2024-11-15 11:53:22.929213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.606 [2024-11-15 11:53:22.929901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.606 [2024-11-15 11:53:22.929963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.606 [2024-11-15 11:53:22.929976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.606 [2024-11-15 11:53:22.930232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.606 [2024-11-15 11:53:22.930458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.606 [2024-11-15 11:53:22.930467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.606 [2024-11-15 11:53:22.930477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.606 [2024-11-15 11:53:22.930493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.606 [2024-11-15 11:53:22.943043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.606 [2024-11-15 11:53:22.943694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.606 [2024-11-15 11:53:22.943758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.606 [2024-11-15 11:53:22.943771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.606 [2024-11-15 11:53:22.944027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.606 [2024-11-15 11:53:22.944252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.606 [2024-11-15 11:53:22.944261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.606 [2024-11-15 11:53:22.944270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.606 [2024-11-15 11:53:22.944279] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.606 [2024-11-15 11:53:22.957015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.606 [2024-11-15 11:53:22.957674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.606 [2024-11-15 11:53:22.957738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.606 [2024-11-15 11:53:22.957751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.606 [2024-11-15 11:53:22.958007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.606 [2024-11-15 11:53:22.958233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.606 [2024-11-15 11:53:22.958242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.606 [2024-11-15 11:53:22.958251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.606 [2024-11-15 11:53:22.958260] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.606 [2024-11-15 11:53:22.971001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.606 [2024-11-15 11:53:22.971527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.606 [2024-11-15 11:53:22.971556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.606 [2024-11-15 11:53:22.971575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.606 [2024-11-15 11:53:22.971798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.606 [2024-11-15 11:53:22.972019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.606 [2024-11-15 11:53:22.972028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.606 [2024-11-15 11:53:22.972035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.606 [2024-11-15 11:53:22.972043] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.606 [2024-11-15 11:53:22.984948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.606 [2024-11-15 11:53:22.985521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.606 [2024-11-15 11:53:22.985545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.607 [2024-11-15 11:53:22.985554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.607 [2024-11-15 11:53:22.985782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.607 [2024-11-15 11:53:22.986003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.607 [2024-11-15 11:53:22.986013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.607 [2024-11-15 11:53:22.986021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.607 [2024-11-15 11:53:22.986028] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.607 [2024-11-15 11:53:22.997681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.607 [2024-11-15 11:53:22.998220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.607 [2024-11-15 11:53:22.998241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.607 [2024-11-15 11:53:22.998248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.607 [2024-11-15 11:53:22.998401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.607 [2024-11-15 11:53:22.998553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.607 [2024-11-15 11:53:22.998559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.607 [2024-11-15 11:53:22.998577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.607 [2024-11-15 11:53:22.998583] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.607 [2024-11-15 11:53:23.010310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.607 [2024-11-15 11:53:23.010947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.607 [2024-11-15 11:53:23.011000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.607 [2024-11-15 11:53:23.011010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.607 [2024-11-15 11:53:23.011192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.607 [2024-11-15 11:53:23.011349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.607 [2024-11-15 11:53:23.011356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.607 [2024-11-15 11:53:23.011363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.607 [2024-11-15 11:53:23.011370] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.607 [2024-11-15 11:53:23.023002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.607 [2024-11-15 11:53:23.023499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.607 [2024-11-15 11:53:23.023522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.607 [2024-11-15 11:53:23.023534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.607 [2024-11-15 11:53:23.023694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.607 [2024-11-15 11:53:23.023847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.607 [2024-11-15 11:53:23.023853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.607 [2024-11-15 11:53:23.023858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.607 [2024-11-15 11:53:23.023864] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.607 [2024-11-15 11:53:23.035712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.607 [2024-11-15 11:53:23.036282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.607 [2024-11-15 11:53:23.036324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.607 [2024-11-15 11:53:23.036333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.607 [2024-11-15 11:53:23.036508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.607 [2024-11-15 11:53:23.036674] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.607 [2024-11-15 11:53:23.036681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.607 [2024-11-15 11:53:23.036687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.607 [2024-11-15 11:53:23.036693] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.607 [2024-11-15 11:53:23.048431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.607 [2024-11-15 11:53:23.049026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.607 [2024-11-15 11:53:23.049067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.607 [2024-11-15 11:53:23.049075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.607 [2024-11-15 11:53:23.049249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.607 [2024-11-15 11:53:23.049404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.607 [2024-11-15 11:53:23.049411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.607 [2024-11-15 11:53:23.049417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.607 [2024-11-15 11:53:23.049423] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.607 [2024-11-15 11:53:23.061170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.607 [2024-11-15 11:53:23.061813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.607 [2024-11-15 11:53:23.061853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.607 [2024-11-15 11:53:23.061861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.607 [2024-11-15 11:53:23.062034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.607 [2024-11-15 11:53:23.062188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.607 [2024-11-15 11:53:23.062199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.607 [2024-11-15 11:53:23.062206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.607 [2024-11-15 11:53:23.062211] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.607 [2024-11-15 11:53:23.073807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.607 [2024-11-15 11:53:23.074317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.607 [2024-11-15 11:53:23.074335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.607 [2024-11-15 11:53:23.074341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.607 [2024-11-15 11:53:23.074493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.607 [2024-11-15 11:53:23.074651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.607 [2024-11-15 11:53:23.074658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.607 [2024-11-15 11:53:23.074663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.607 [2024-11-15 11:53:23.074668] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.607 [2024-11-15 11:53:23.086536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.607 [2024-11-15 11:53:23.086995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.607 [2024-11-15 11:53:23.087010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.607 [2024-11-15 11:53:23.087016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.607 [2024-11-15 11:53:23.087167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.607 [2024-11-15 11:53:23.087318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.607 [2024-11-15 11:53:23.087324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.607 [2024-11-15 11:53:23.087330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.607 [2024-11-15 11:53:23.087335] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.607 [2024-11-15 11:53:23.099196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.607 [2024-11-15 11:53:23.099701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.607 [2024-11-15 11:53:23.099736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.607 [2024-11-15 11:53:23.099745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.607 [2024-11-15 11:53:23.099916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.607 [2024-11-15 11:53:23.100070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.607 [2024-11-15 11:53:23.100077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.607 [2024-11-15 11:53:23.100083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.607 [2024-11-15 11:53:23.100092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.871 [2024-11-15 11:53:23.111804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.871 [2024-11-15 11:53:23.112309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.871 [2024-11-15 11:53:23.112325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.871 [2024-11-15 11:53:23.112330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.871 [2024-11-15 11:53:23.112481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.871 [2024-11-15 11:53:23.112637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.871 [2024-11-15 11:53:23.112643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.871 [2024-11-15 11:53:23.112648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.871 [2024-11-15 11:53:23.112653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.871 [2024-11-15 11:53:23.124506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.871 [2024-11-15 11:53:23.125039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.871 [2024-11-15 11:53:23.125071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.871 [2024-11-15 11:53:23.125080] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.871 [2024-11-15 11:53:23.125247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.871 [2024-11-15 11:53:23.125400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.871 [2024-11-15 11:53:23.125406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.871 [2024-11-15 11:53:23.125412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.871 [2024-11-15 11:53:23.125418] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.871 [2024-11-15 11:53:23.137125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.871 [2024-11-15 11:53:23.137478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.871 [2024-11-15 11:53:23.137493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.871 [2024-11-15 11:53:23.137499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.871 [2024-11-15 11:53:23.137653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.871 [2024-11-15 11:53:23.137803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.871 [2024-11-15 11:53:23.137809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.871 [2024-11-15 11:53:23.137814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.871 [2024-11-15 11:53:23.137819] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.871 [2024-11-15 11:53:23.149793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.871 [2024-11-15 11:53:23.150281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.871 [2024-11-15 11:53:23.150293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.871 [2024-11-15 11:53:23.150299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.871 [2024-11-15 11:53:23.150448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.872 [2024-11-15 11:53:23.150602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.872 [2024-11-15 11:53:23.150608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.872 [2024-11-15 11:53:23.150613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.872 [2024-11-15 11:53:23.150618] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.872 [2024-11-15 11:53:23.162447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.872 [2024-11-15 11:53:23.162882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.872 [2024-11-15 11:53:23.162896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.872 [2024-11-15 11:53:23.162902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.872 [2024-11-15 11:53:23.163053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.872 [2024-11-15 11:53:23.163203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.872 [2024-11-15 11:53:23.163208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.872 [2024-11-15 11:53:23.163213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.872 [2024-11-15 11:53:23.163218] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.872 [2024-11-15 11:53:23.175111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.872 [2024-11-15 11:53:23.175784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.872 [2024-11-15 11:53:23.175814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.872 [2024-11-15 11:53:23.175824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.872 [2024-11-15 11:53:23.175994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.872 [2024-11-15 11:53:23.176147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.872 [2024-11-15 11:53:23.176154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.872 [2024-11-15 11:53:23.176159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.872 [2024-11-15 11:53:23.176165] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.872 [2024-11-15 11:53:23.187737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.872 [2024-11-15 11:53:23.188314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.872 [2024-11-15 11:53:23.188344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.872 [2024-11-15 11:53:23.188352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.872 [2024-11-15 11:53:23.188522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.872 [2024-11-15 11:53:23.188681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.872 [2024-11-15 11:53:23.188688] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.872 [2024-11-15 11:53:23.188694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.872 [2024-11-15 11:53:23.188700] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.872 [2024-11-15 11:53:23.200411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.872 [2024-11-15 11:53:23.200935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.872 [2024-11-15 11:53:23.200951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.872 [2024-11-15 11:53:23.200956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.872 [2024-11-15 11:53:23.201110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.872 [2024-11-15 11:53:23.201268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.872 [2024-11-15 11:53:23.201274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.872 [2024-11-15 11:53:23.201279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.872 [2024-11-15 11:53:23.201284] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.872 [2024-11-15 11:53:23.213130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.872 [2024-11-15 11:53:23.213665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.872 [2024-11-15 11:53:23.213696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.872 [2024-11-15 11:53:23.213704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.872 [2024-11-15 11:53:23.213873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.872 [2024-11-15 11:53:23.214026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.872 [2024-11-15 11:53:23.214032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.872 [2024-11-15 11:53:23.214038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.872 [2024-11-15 11:53:23.214043] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.872 [2024-11-15 11:53:23.225769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.872 [2024-11-15 11:53:23.226344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.872 [2024-11-15 11:53:23.226375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.872 [2024-11-15 11:53:23.226384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.872 [2024-11-15 11:53:23.226553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.872 [2024-11-15 11:53:23.226711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.872 [2024-11-15 11:53:23.226726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.872 [2024-11-15 11:53:23.226731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.872 [2024-11-15 11:53:23.226737] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.872 [2024-11-15 11:53:23.238440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.872 [2024-11-15 11:53:23.238943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.872 [2024-11-15 11:53:23.238959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.872 [2024-11-15 11:53:23.238964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.872 [2024-11-15 11:53:23.239115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.872 [2024-11-15 11:53:23.239265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.872 [2024-11-15 11:53:23.239270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.872 [2024-11-15 11:53:23.239275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.872 [2024-11-15 11:53:23.239280] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.872 [2024-11-15 11:53:23.251120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.872 [2024-11-15 11:53:23.251661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.872 [2024-11-15 11:53:23.251691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.872 [2024-11-15 11:53:23.251701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.872 [2024-11-15 11:53:23.251870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.872 [2024-11-15 11:53:23.252025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.872 [2024-11-15 11:53:23.252032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.872 [2024-11-15 11:53:23.252037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.872 [2024-11-15 11:53:23.252043] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.872 [2024-11-15 11:53:23.263753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.872 [2024-11-15 11:53:23.264349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.872 [2024-11-15 11:53:23.264380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.872 [2024-11-15 11:53:23.264388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.872 [2024-11-15 11:53:23.264555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.872 [2024-11-15 11:53:23.264715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.872 [2024-11-15 11:53:23.264723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.872 [2024-11-15 11:53:23.264729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.872 [2024-11-15 11:53:23.264738] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.872 [2024-11-15 11:53:23.276520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.872 [2024-11-15 11:53:23.277098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.872 [2024-11-15 11:53:23.277129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.872 [2024-11-15 11:53:23.277139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.872 [2024-11-15 11:53:23.277306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.873 [2024-11-15 11:53:23.277459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.873 [2024-11-15 11:53:23.277465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.873 [2024-11-15 11:53:23.277471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.873 [2024-11-15 11:53:23.277477] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.873 [2024-11-15 11:53:23.289192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.873 [2024-11-15 11:53:23.289570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.873 [2024-11-15 11:53:23.289587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.873 [2024-11-15 11:53:23.289592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.873 [2024-11-15 11:53:23.289743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.873 [2024-11-15 11:53:23.289893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.873 [2024-11-15 11:53:23.289898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.873 [2024-11-15 11:53:23.289903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.873 [2024-11-15 11:53:23.289908] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.873 [2024-11-15 11:53:23.301890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.873 [2024-11-15 11:53:23.302356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.873 [2024-11-15 11:53:23.302369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.873 [2024-11-15 11:53:23.302374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.873 [2024-11-15 11:53:23.302524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.873 [2024-11-15 11:53:23.302678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.873 [2024-11-15 11:53:23.302684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.873 [2024-11-15 11:53:23.302690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.873 [2024-11-15 11:53:23.302694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.873 [2024-11-15 11:53:23.314521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.873 [2024-11-15 11:53:23.314980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.873 [2024-11-15 11:53:23.314992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.873 [2024-11-15 11:53:23.314997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.873 [2024-11-15 11:53:23.315147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.873 [2024-11-15 11:53:23.315297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.873 [2024-11-15 11:53:23.315302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.873 [2024-11-15 11:53:23.315307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.873 [2024-11-15 11:53:23.315312] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.873 [2024-11-15 11:53:23.327157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.873 [2024-11-15 11:53:23.327635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.873 [2024-11-15 11:53:23.327648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.873 [2024-11-15 11:53:23.327653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.873 [2024-11-15 11:53:23.327803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.873 [2024-11-15 11:53:23.327954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.873 [2024-11-15 11:53:23.327960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.873 [2024-11-15 11:53:23.327964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.873 [2024-11-15 11:53:23.327969] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.873 [2024-11-15 11:53:23.339794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.873 [2024-11-15 11:53:23.340240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.873 [2024-11-15 11:53:23.340252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.873 [2024-11-15 11:53:23.340257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.873 [2024-11-15 11:53:23.340407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.873 [2024-11-15 11:53:23.340557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.873 [2024-11-15 11:53:23.340568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.873 [2024-11-15 11:53:23.340573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.873 [2024-11-15 11:53:23.340578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.873 [2024-11-15 11:53:23.352411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.873 [2024-11-15 11:53:23.352969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.873 [2024-11-15 11:53:23.352999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.873 [2024-11-15 11:53:23.353008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.873 [2024-11-15 11:53:23.353178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.873 [2024-11-15 11:53:23.353331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.873 [2024-11-15 11:53:23.353337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.873 [2024-11-15 11:53:23.353343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.873 [2024-11-15 11:53:23.353348] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:57.873 [2024-11-15 11:53:23.365054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:57.873 [2024-11-15 11:53:23.365548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.873 [2024-11-15 11:53:23.365567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:57.873 [2024-11-15 11:53:23.365573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:57.873 [2024-11-15 11:53:23.365724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:57.873 [2024-11-15 11:53:23.365873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:57.873 [2024-11-15 11:53:23.365880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:57.873 [2024-11-15 11:53:23.365885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:57.873 [2024-11-15 11:53:23.365889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.136 [2024-11-15 11:53:23.377728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.136 [2024-11-15 11:53:23.378215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.136 [2024-11-15 11:53:23.378228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.136 [2024-11-15 11:53:23.378233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.136 [2024-11-15 11:53:23.378384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.136 [2024-11-15 11:53:23.378533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.136 [2024-11-15 11:53:23.378539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.136 [2024-11-15 11:53:23.378544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.136 [2024-11-15 11:53:23.378549] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.136 [2024-11-15 11:53:23.390394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.136 [2024-11-15 11:53:23.390846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.136 [2024-11-15 11:53:23.390859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.136 [2024-11-15 11:53:23.390864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.136 [2024-11-15 11:53:23.391014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.136 [2024-11-15 11:53:23.391164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.136 [2024-11-15 11:53:23.391173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.136 [2024-11-15 11:53:23.391178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.136 [2024-11-15 11:53:23.391182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.136 [2024-11-15 11:53:23.403016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.136 [2024-11-15 11:53:23.403448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.136 [2024-11-15 11:53:23.403479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.136 [2024-11-15 11:53:23.403487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.136 [2024-11-15 11:53:23.403660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.136 [2024-11-15 11:53:23.403814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.136 [2024-11-15 11:53:23.403820] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.136 [2024-11-15 11:53:23.403826] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.136 [2024-11-15 11:53:23.403831] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.136 [2024-11-15 11:53:23.415673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.136 [2024-11-15 11:53:23.416029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.136 [2024-11-15 11:53:23.416044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.136 [2024-11-15 11:53:23.416049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.136 [2024-11-15 11:53:23.416199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.136 [2024-11-15 11:53:23.416349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.136 [2024-11-15 11:53:23.416355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.136 [2024-11-15 11:53:23.416360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.136 [2024-11-15 11:53:23.416365] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.136 [2024-11-15 11:53:23.428352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.136 [2024-11-15 11:53:23.428604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.136 [2024-11-15 11:53:23.428618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.136 [2024-11-15 11:53:23.428624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.136 [2024-11-15 11:53:23.428775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.136 [2024-11-15 11:53:23.428925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.136 [2024-11-15 11:53:23.428931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.136 [2024-11-15 11:53:23.428936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.137 [2024-11-15 11:53:23.428945] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.137 [2024-11-15 11:53:23.441068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.137 [2024-11-15 11:53:23.441509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.137 [2024-11-15 11:53:23.441522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.137 [2024-11-15 11:53:23.441527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.137 [2024-11-15 11:53:23.441682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.137 [2024-11-15 11:53:23.441833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.137 [2024-11-15 11:53:23.441838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.137 [2024-11-15 11:53:23.441843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.137 [2024-11-15 11:53:23.441848] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.137 [2024-11-15 11:53:23.453679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.137 [2024-11-15 11:53:23.454132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.137 [2024-11-15 11:53:23.454144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.137 [2024-11-15 11:53:23.454149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.137 [2024-11-15 11:53:23.454299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.137 [2024-11-15 11:53:23.454449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.137 [2024-11-15 11:53:23.454454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.137 [2024-11-15 11:53:23.454459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.137 [2024-11-15 11:53:23.454464] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.137 [2024-11-15 11:53:23.466294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.137 [2024-11-15 11:53:23.466799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.137 [2024-11-15 11:53:23.466812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.137 [2024-11-15 11:53:23.466817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.137 [2024-11-15 11:53:23.466967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.137 [2024-11-15 11:53:23.467116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.137 [2024-11-15 11:53:23.467122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.137 [2024-11-15 11:53:23.467127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.137 [2024-11-15 11:53:23.467131] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.137 [2024-11-15 11:53:23.478963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.137 [2024-11-15 11:53:23.479503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.137 [2024-11-15 11:53:23.479534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.137 [2024-11-15 11:53:23.479543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.137 [2024-11-15 11:53:23.479719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.137 [2024-11-15 11:53:23.479873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.137 [2024-11-15 11:53:23.479879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.137 [2024-11-15 11:53:23.479885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.137 [2024-11-15 11:53:23.479891] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.137 [2024-11-15 11:53:23.491605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.137 [2024-11-15 11:53:23.492094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.137 [2024-11-15 11:53:23.492109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.137 [2024-11-15 11:53:23.492115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.137 [2024-11-15 11:53:23.492265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.137 [2024-11-15 11:53:23.492415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.137 [2024-11-15 11:53:23.492420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.137 [2024-11-15 11:53:23.492426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.137 [2024-11-15 11:53:23.492430] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.137 [2024-11-15 11:53:23.504273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.137 [2024-11-15 11:53:23.504905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.137 [2024-11-15 11:53:23.504936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.137 [2024-11-15 11:53:23.504945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.137 [2024-11-15 11:53:23.505112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.137 [2024-11-15 11:53:23.505265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.137 [2024-11-15 11:53:23.505271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.137 [2024-11-15 11:53:23.505277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.137 [2024-11-15 11:53:23.505282] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.137 [2024-11-15 11:53:23.516927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.137 [2024-11-15 11:53:23.517523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.137 [2024-11-15 11:53:23.517553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.137 [2024-11-15 11:53:23.517567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.137 [2024-11-15 11:53:23.517740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.137 [2024-11-15 11:53:23.517892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.137 [2024-11-15 11:53:23.517898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.137 [2024-11-15 11:53:23.517905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.137 [2024-11-15 11:53:23.517910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.137 [2024-11-15 11:53:23.529610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.137 [2024-11-15 11:53:23.529968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.137 [2024-11-15 11:53:23.529984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.137 [2024-11-15 11:53:23.529990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.137 [2024-11-15 11:53:23.530142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.137 [2024-11-15 11:53:23.530292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.137 [2024-11-15 11:53:23.530298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.137 [2024-11-15 11:53:23.530303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.137 [2024-11-15 11:53:23.530309] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.137 [2024-11-15 11:53:23.542278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.137 [2024-11-15 11:53:23.542631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.137 [2024-11-15 11:53:23.542644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.137 [2024-11-15 11:53:23.542650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.137 [2024-11-15 11:53:23.542800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.137 [2024-11-15 11:53:23.542949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.137 [2024-11-15 11:53:23.542954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.137 [2024-11-15 11:53:23.542959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.137 [2024-11-15 11:53:23.542964] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.137 [2024-11-15 11:53:23.554936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.137 [2024-11-15 11:53:23.555390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.137 [2024-11-15 11:53:23.555402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.137 [2024-11-15 11:53:23.555407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.137 [2024-11-15 11:53:23.555557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.138 [2024-11-15 11:53:23.555711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.138 [2024-11-15 11:53:23.555721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.138 [2024-11-15 11:53:23.555726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.138 [2024-11-15 11:53:23.555731] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.138 [2024-11-15 11:53:23.567566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.138 [2024-11-15 11:53:23.568023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.138 [2024-11-15 11:53:23.568035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.138 [2024-11-15 11:53:23.568040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.138 [2024-11-15 11:53:23.568190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.138 [2024-11-15 11:53:23.568340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.138 [2024-11-15 11:53:23.568346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.138 [2024-11-15 11:53:23.568350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.138 [2024-11-15 11:53:23.568355] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.138 [2024-11-15 11:53:23.580245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.138 [2024-11-15 11:53:23.580813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.138 [2024-11-15 11:53:23.580843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:58.138 [2024-11-15 11:53:23.580852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:58.138 [2024-11-15 11:53:23.581018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:58.138 [2024-11-15 11:53:23.581171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.138 [2024-11-15 11:53:23.581177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.138 [2024-11-15 11:53:23.581183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.138 [2024-11-15 11:53:23.581188] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:58.138 [2024-11-15 11:53:23.592903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.138 [2024-11-15 11:53:23.593347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.138 [2024-11-15 11:53:23.593362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:58.138 [2024-11-15 11:53:23.593367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:58.138 [2024-11-15 11:53:23.593518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:58.138 [2024-11-15 11:53:23.593673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.138 [2024-11-15 11:53:23.593679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.138 [2024-11-15 11:53:23.593684] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.138 [2024-11-15 11:53:23.593692] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.138 [2024-11-15 11:53:23.605520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.138 [2024-11-15 11:53:23.606095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.138 [2024-11-15 11:53:23.606126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:58.138 [2024-11-15 11:53:23.606134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:58.138 [2024-11-15 11:53:23.606301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:58.138 [2024-11-15 11:53:23.606454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.138 [2024-11-15 11:53:23.606460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.138 [2024-11-15 11:53:23.606466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.138 [2024-11-15 11:53:23.606471] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:58.138 6727.75 IOPS, 26.28 MiB/s [2024-11-15T10:53:23.636Z] [2024-11-15 11:53:23.618182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.138 [2024-11-15 11:53:23.618710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.138 [2024-11-15 11:53:23.618740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:58.138 [2024-11-15 11:53:23.618749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:58.138 [2024-11-15 11:53:23.618926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:58.138 [2024-11-15 11:53:23.619079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.138 [2024-11-15 11:53:23.619086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.138 [2024-11-15 11:53:23.619092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.138 [2024-11-15 11:53:23.619097] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.138 [2024-11-15 11:53:23.630805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.401 [2024-11-15 11:53:23.631380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.401 [2024-11-15 11:53:23.631411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:58.401 [2024-11-15 11:53:23.631420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:58.401 [2024-11-15 11:53:23.631594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:58.401 [2024-11-15 11:53:23.631748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.401 [2024-11-15 11:53:23.631755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.401 [2024-11-15 11:53:23.631760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.401 [2024-11-15 11:53:23.631766] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:58.401 [2024-11-15 11:53:23.643463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.401 [2024-11-15 11:53:23.644103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.401 [2024-11-15 11:53:23.644133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:58.401 [2024-11-15 11:53:23.644141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:58.401 [2024-11-15 11:53:23.644308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:58.401 [2024-11-15 11:53:23.644461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.401 [2024-11-15 11:53:23.644468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.401 [2024-11-15 11:53:23.644473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.401 [2024-11-15 11:53:23.644479] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.401 [2024-11-15 11:53:23.656182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.401 [2024-11-15 11:53:23.656653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.401 [2024-11-15 11:53:23.656684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.401 [2024-11-15 11:53:23.656693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.401 [2024-11-15 11:53:23.656862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.401 [2024-11-15 11:53:23.657015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.401 [2024-11-15 11:53:23.657021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.401 [2024-11-15 11:53:23.657027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.401 [2024-11-15 11:53:23.657032] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.401 [2024-11-15 11:53:23.668876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.401 [2024-11-15 11:53:23.669378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.401 [2024-11-15 11:53:23.669408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.401 [2024-11-15 11:53:23.669417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.401 [2024-11-15 11:53:23.669590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.401 [2024-11-15 11:53:23.669743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.401 [2024-11-15 11:53:23.669750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.401 [2024-11-15 11:53:23.669755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.401 [2024-11-15 11:53:23.669761] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.401 [2024-11-15 11:53:23.681602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.401 [2024-11-15 11:53:23.682193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.401 [2024-11-15 11:53:23.682223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.401 [2024-11-15 11:53:23.682235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.401 [2024-11-15 11:53:23.682402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.401 [2024-11-15 11:53:23.682554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.401 [2024-11-15 11:53:23.682561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.401 [2024-11-15 11:53:23.682574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.401 [2024-11-15 11:53:23.682579] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.401 [2024-11-15 11:53:23.694280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.401 [2024-11-15 11:53:23.694699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.401 [2024-11-15 11:53:23.694729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.401 [2024-11-15 11:53:23.694738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.401 [2024-11-15 11:53:23.694907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.401 [2024-11-15 11:53:23.695060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.401 [2024-11-15 11:53:23.695066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.401 [2024-11-15 11:53:23.695072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.401 [2024-11-15 11:53:23.695078] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.401 [2024-11-15 11:53:23.706947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.401 [2024-11-15 11:53:23.707404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.401 [2024-11-15 11:53:23.707419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.401 [2024-11-15 11:53:23.707425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.401 [2024-11-15 11:53:23.707581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.401 [2024-11-15 11:53:23.707732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.401 [2024-11-15 11:53:23.707738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.401 [2024-11-15 11:53:23.707743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.401 [2024-11-15 11:53:23.707748] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.401 [2024-11-15 11:53:23.719591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.401 [2024-11-15 11:53:23.719964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.401 [2024-11-15 11:53:23.719977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.401 [2024-11-15 11:53:23.719982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.401 [2024-11-15 11:53:23.720132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.401 [2024-11-15 11:53:23.720286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.401 [2024-11-15 11:53:23.720291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.401 [2024-11-15 11:53:23.720296] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.401 [2024-11-15 11:53:23.720301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.401 [2024-11-15 11:53:23.732268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.401 [2024-11-15 11:53:23.732761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.401 [2024-11-15 11:53:23.732774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.401 [2024-11-15 11:53:23.732779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.401 [2024-11-15 11:53:23.732929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.401 [2024-11-15 11:53:23.733078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.401 [2024-11-15 11:53:23.733084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.401 [2024-11-15 11:53:23.733089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.401 [2024-11-15 11:53:23.733094] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.401 [2024-11-15 11:53:23.744932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.401 [2024-11-15 11:53:23.745506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.401 [2024-11-15 11:53:23.745536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.401 [2024-11-15 11:53:23.745545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.402 [2024-11-15 11:53:23.745720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.402 [2024-11-15 11:53:23.745873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.402 [2024-11-15 11:53:23.745880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.402 [2024-11-15 11:53:23.745885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.402 [2024-11-15 11:53:23.745891] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.402 [2024-11-15 11:53:23.757587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.402 [2024-11-15 11:53:23.757962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.402 [2024-11-15 11:53:23.757977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.402 [2024-11-15 11:53:23.757983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.402 [2024-11-15 11:53:23.758134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.402 [2024-11-15 11:53:23.758284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.402 [2024-11-15 11:53:23.758290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.402 [2024-11-15 11:53:23.758294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.402 [2024-11-15 11:53:23.758303] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.402 [2024-11-15 11:53:23.770281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.402 [2024-11-15 11:53:23.770951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.402 [2024-11-15 11:53:23.770982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.402 [2024-11-15 11:53:23.770991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.402 [2024-11-15 11:53:23.771158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.402 [2024-11-15 11:53:23.771311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.402 [2024-11-15 11:53:23.771317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.402 [2024-11-15 11:53:23.771323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.402 [2024-11-15 11:53:23.771328] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.402 [2024-11-15 11:53:23.782883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.402 [2024-11-15 11:53:23.783255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.402 [2024-11-15 11:53:23.783270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.402 [2024-11-15 11:53:23.783276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.402 [2024-11-15 11:53:23.783426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.402 [2024-11-15 11:53:23.783580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.402 [2024-11-15 11:53:23.783586] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.402 [2024-11-15 11:53:23.783591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.402 [2024-11-15 11:53:23.783596] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.402 [2024-11-15 11:53:23.795581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.402 [2024-11-15 11:53:23.796029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.402 [2024-11-15 11:53:23.796042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.402 [2024-11-15 11:53:23.796047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.402 [2024-11-15 11:53:23.796197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.402 [2024-11-15 11:53:23.796347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.402 [2024-11-15 11:53:23.796352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.402 [2024-11-15 11:53:23.796357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.402 [2024-11-15 11:53:23.796362] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.402 [2024-11-15 11:53:23.808197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.402 [2024-11-15 11:53:23.808579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.402 [2024-11-15 11:53:23.808594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.402 [2024-11-15 11:53:23.808600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.402 [2024-11-15 11:53:23.808751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.402 [2024-11-15 11:53:23.808901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.402 [2024-11-15 11:53:23.808907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.402 [2024-11-15 11:53:23.808912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.402 [2024-11-15 11:53:23.808917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.402 [2024-11-15 11:53:23.820894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.402 [2024-11-15 11:53:23.821479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.402 [2024-11-15 11:53:23.821509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.402 [2024-11-15 11:53:23.821517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.402 [2024-11-15 11:53:23.821691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.402 [2024-11-15 11:53:23.821845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.402 [2024-11-15 11:53:23.821851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.402 [2024-11-15 11:53:23.821857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.402 [2024-11-15 11:53:23.821863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.402 [2024-11-15 11:53:23.833553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.402 [2024-11-15 11:53:23.834023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.402 [2024-11-15 11:53:23.834038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.402 [2024-11-15 11:53:23.834044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.402 [2024-11-15 11:53:23.834194] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.402 [2024-11-15 11:53:23.834344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.402 [2024-11-15 11:53:23.834350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.402 [2024-11-15 11:53:23.834355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.402 [2024-11-15 11:53:23.834359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.402 [2024-11-15 11:53:23.846192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.402 [2024-11-15 11:53:23.846697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.402 [2024-11-15 11:53:23.846728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.402 [2024-11-15 11:53:23.846743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.402 [2024-11-15 11:53:23.846912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.402 [2024-11-15 11:53:23.847065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.402 [2024-11-15 11:53:23.847071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.402 [2024-11-15 11:53:23.847076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.402 [2024-11-15 11:53:23.847082] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.402 [2024-11-15 11:53:23.858922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.402 [2024-11-15 11:53:23.859499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.402 [2024-11-15 11:53:23.859529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.402 [2024-11-15 11:53:23.859538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.402 [2024-11-15 11:53:23.859714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.402 [2024-11-15 11:53:23.859867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.402 [2024-11-15 11:53:23.859874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.402 [2024-11-15 11:53:23.859880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.402 [2024-11-15 11:53:23.859885] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.402 [2024-11-15 11:53:23.871580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.402 [2024-11-15 11:53:23.872137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.402 [2024-11-15 11:53:23.872167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.402 [2024-11-15 11:53:23.872176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.403 [2024-11-15 11:53:23.872342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.403 [2024-11-15 11:53:23.872495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.403 [2024-11-15 11:53:23.872501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.403 [2024-11-15 11:53:23.872507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.403 [2024-11-15 11:53:23.872513] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.403 [2024-11-15 11:53:23.884210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.403 [2024-11-15 11:53:23.884826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.403 [2024-11-15 11:53:23.884857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.403 [2024-11-15 11:53:23.884866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.403 [2024-11-15 11:53:23.885032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.403 [2024-11-15 11:53:23.885189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.403 [2024-11-15 11:53:23.885196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.403 [2024-11-15 11:53:23.885201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.403 [2024-11-15 11:53:23.885207] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.666 [2024-11-15 11:53:23.896914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.666 [2024-11-15 11:53:23.897486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.666 [2024-11-15 11:53:23.897516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.666 [2024-11-15 11:53:23.897525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.666 [2024-11-15 11:53:23.897702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.666 [2024-11-15 11:53:23.897855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.666 [2024-11-15 11:53:23.897861] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.666 [2024-11-15 11:53:23.897867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.666 [2024-11-15 11:53:23.897872] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.666 [2024-11-15 11:53:23.909568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.666 [2024-11-15 11:53:23.910148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.666 [2024-11-15 11:53:23.910178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.666 [2024-11-15 11:53:23.910187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.666 [2024-11-15 11:53:23.910354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.666 [2024-11-15 11:53:23.910507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.666 [2024-11-15 11:53:23.910513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.666 [2024-11-15 11:53:23.910518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.666 [2024-11-15 11:53:23.910524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.666 [2024-11-15 11:53:23.922227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.666 [2024-11-15 11:53:23.922716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.666 [2024-11-15 11:53:23.922731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.666 [2024-11-15 11:53:23.922737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.666 [2024-11-15 11:53:23.922887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.666 [2024-11-15 11:53:23.923038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.666 [2024-11-15 11:53:23.923043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.666 [2024-11-15 11:53:23.923048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.666 [2024-11-15 11:53:23.923057] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.666 [2024-11-15 11:53:23.934899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.666 [2024-11-15 11:53:23.935390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.666 [2024-11-15 11:53:23.935403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.666 [2024-11-15 11:53:23.935408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.666 [2024-11-15 11:53:23.935558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.666 [2024-11-15 11:53:23.935714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.666 [2024-11-15 11:53:23.935720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.666 [2024-11-15 11:53:23.935725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.666 [2024-11-15 11:53:23.935729] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.666 [2024-11-15 11:53:23.947553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.666 [2024-11-15 11:53:23.948130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.666 [2024-11-15 11:53:23.948160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.666 [2024-11-15 11:53:23.948169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.666 [2024-11-15 11:53:23.948335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.666 [2024-11-15 11:53:23.948488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.666 [2024-11-15 11:53:23.948494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.666 [2024-11-15 11:53:23.948499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.666 [2024-11-15 11:53:23.948505] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.666 [2024-11-15 11:53:23.960231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.666 [2024-11-15 11:53:23.960847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.666 [2024-11-15 11:53:23.960879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.666 [2024-11-15 11:53:23.960887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.666 [2024-11-15 11:53:23.961054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.666 [2024-11-15 11:53:23.961207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.666 [2024-11-15 11:53:23.961213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.666 [2024-11-15 11:53:23.961218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.666 [2024-11-15 11:53:23.961224] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.666 [2024-11-15 11:53:23.972923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.666 [2024-11-15 11:53:23.973505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.666 [2024-11-15 11:53:23.973535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.666 [2024-11-15 11:53:23.973544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.666 [2024-11-15 11:53:23.973718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.667 [2024-11-15 11:53:23.973872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.667 [2024-11-15 11:53:23.973878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.667 [2024-11-15 11:53:23.973884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.667 [2024-11-15 11:53:23.973889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.667 [2024-11-15 11:53:23.985578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.667 [2024-11-15 11:53:23.986153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.667 [2024-11-15 11:53:23.986183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.667 [2024-11-15 11:53:23.986192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.667 [2024-11-15 11:53:23.986358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.667 [2024-11-15 11:53:23.986510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.667 [2024-11-15 11:53:23.986517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.667 [2024-11-15 11:53:23.986523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.667 [2024-11-15 11:53:23.986528] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.667 [2024-11-15 11:53:23.998235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.667 [2024-11-15 11:53:23.998848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.667 [2024-11-15 11:53:23.998879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.667 [2024-11-15 11:53:23.998888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.667 [2024-11-15 11:53:23.999055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.667 [2024-11-15 11:53:23.999208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.667 [2024-11-15 11:53:23.999214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.667 [2024-11-15 11:53:23.999221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.667 [2024-11-15 11:53:23.999227] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.667 [2024-11-15 11:53:24.010930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.667 [2024-11-15 11:53:24.011432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.667 [2024-11-15 11:53:24.011463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.667 [2024-11-15 11:53:24.011475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.667 [2024-11-15 11:53:24.011648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.667 [2024-11-15 11:53:24.011802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.667 [2024-11-15 11:53:24.011808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.667 [2024-11-15 11:53:24.011813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.667 [2024-11-15 11:53:24.011819] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.667 [2024-11-15 11:53:24.023665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.667 [2024-11-15 11:53:24.024252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.667 [2024-11-15 11:53:24.024282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.667 [2024-11-15 11:53:24.024291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.667 [2024-11-15 11:53:24.024458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.667 [2024-11-15 11:53:24.024619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.667 [2024-11-15 11:53:24.024626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.667 [2024-11-15 11:53:24.024632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.667 [2024-11-15 11:53:24.024637] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.667 [2024-11-15 11:53:24.036334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.667 [2024-11-15 11:53:24.036991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.667 [2024-11-15 11:53:24.037021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.667 [2024-11-15 11:53:24.037030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.667 [2024-11-15 11:53:24.037196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.667 [2024-11-15 11:53:24.037349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.667 [2024-11-15 11:53:24.037356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.667 [2024-11-15 11:53:24.037362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.667 [2024-11-15 11:53:24.037368] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.667 [2024-11-15 11:53:24.049067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.667 [2024-11-15 11:53:24.049572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.667 [2024-11-15 11:53:24.049587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.667 [2024-11-15 11:53:24.049593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.667 [2024-11-15 11:53:24.049743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.667 [2024-11-15 11:53:24.049894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.667 [2024-11-15 11:53:24.049903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.667 [2024-11-15 11:53:24.049908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.667 [2024-11-15 11:53:24.049913] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.667 [2024-11-15 11:53:24.061745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.667 [2024-11-15 11:53:24.062316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.667 [2024-11-15 11:53:24.062346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.667 [2024-11-15 11:53:24.062355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.667 [2024-11-15 11:53:24.062521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.667 [2024-11-15 11:53:24.062682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.667 [2024-11-15 11:53:24.062689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.667 [2024-11-15 11:53:24.062695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.667 [2024-11-15 11:53:24.062701] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.667 [2024-11-15 11:53:24.074390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.667 [2024-11-15 11:53:24.074974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.667 [2024-11-15 11:53:24.075005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.667 [2024-11-15 11:53:24.075014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.667 [2024-11-15 11:53:24.075181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.667 [2024-11-15 11:53:24.075333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.667 [2024-11-15 11:53:24.075340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.667 [2024-11-15 11:53:24.075345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.667 [2024-11-15 11:53:24.075350] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.667 [2024-11-15 11:53:24.087039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.667 [2024-11-15 11:53:24.087605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.667 [2024-11-15 11:53:24.087636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.667 [2024-11-15 11:53:24.087645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.667 [2024-11-15 11:53:24.087814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.667 [2024-11-15 11:53:24.087967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.667 [2024-11-15 11:53:24.087973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.667 [2024-11-15 11:53:24.087978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.667 [2024-11-15 11:53:24.087988] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.667 [2024-11-15 11:53:24.099698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.667 [2024-11-15 11:53:24.100165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.667 [2024-11-15 11:53:24.100196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.668 [2024-11-15 11:53:24.100204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.668 [2024-11-15 11:53:24.100370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.668 [2024-11-15 11:53:24.100523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.668 [2024-11-15 11:53:24.100530] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.668 [2024-11-15 11:53:24.100535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.668 [2024-11-15 11:53:24.100540] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.668 [2024-11-15 11:53:24.112385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.668 [2024-11-15 11:53:24.112945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.668 [2024-11-15 11:53:24.112975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.668 [2024-11-15 11:53:24.112984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.668 [2024-11-15 11:53:24.113150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.668 [2024-11-15 11:53:24.113303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.668 [2024-11-15 11:53:24.113309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.668 [2024-11-15 11:53:24.113315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.668 [2024-11-15 11:53:24.113321] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.668 [2024-11-15 11:53:24.125030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.668 [2024-11-15 11:53:24.125614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.668 [2024-11-15 11:53:24.125645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.668 [2024-11-15 11:53:24.125654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.668 [2024-11-15 11:53:24.125823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.668 [2024-11-15 11:53:24.125976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.668 [2024-11-15 11:53:24.125982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.668 [2024-11-15 11:53:24.125988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.668 [2024-11-15 11:53:24.125993] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.668 [2024-11-15 11:53:24.137691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.668 [2024-11-15 11:53:24.138271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.668 [2024-11-15 11:53:24.138301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.668 [2024-11-15 11:53:24.138310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.668 [2024-11-15 11:53:24.138477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.668 [2024-11-15 11:53:24.138637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.668 [2024-11-15 11:53:24.138644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.668 [2024-11-15 11:53:24.138650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.668 [2024-11-15 11:53:24.138655] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.668 [2024-11-15 11:53:24.150347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.668 [2024-11-15 11:53:24.150711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.668 [2024-11-15 11:53:24.150726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:58.668 [2024-11-15 11:53:24.150732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:58.668 [2024-11-15 11:53:24.150883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:58.668 [2024-11-15 11:53:24.151033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.668 [2024-11-15 11:53:24.151039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.668 [2024-11-15 11:53:24.151044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.668 [2024-11-15 11:53:24.151049] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.930 [2024-11-15 11:53:24.163187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.930 [2024-11-15 11:53:24.163639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.930 [2024-11-15 11:53:24.163653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:58.930 [2024-11-15 11:53:24.163659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:58.930 [2024-11-15 11:53:24.163810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:58.930 [2024-11-15 11:53:24.163960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.930 [2024-11-15 11:53:24.163966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.930 [2024-11-15 11:53:24.163971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.930 [2024-11-15 11:53:24.163976] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:58.930 [2024-11-15 11:53:24.175808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.930 [2024-11-15 11:53:24.176371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.930 [2024-11-15 11:53:24.176401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:58.930 [2024-11-15 11:53:24.176413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:58.930 [2024-11-15 11:53:24.176587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:58.930 [2024-11-15 11:53:24.176741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.930 [2024-11-15 11:53:24.176747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.930 [2024-11-15 11:53:24.176753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.930 [2024-11-15 11:53:24.176759] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.930 [2024-11-15 11:53:24.188447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.930 [2024-11-15 11:53:24.188999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.930 [2024-11-15 11:53:24.189030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:58.930 [2024-11-15 11:53:24.189039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:58.930 [2024-11-15 11:53:24.189206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:58.930 [2024-11-15 11:53:24.189359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.930 [2024-11-15 11:53:24.189365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.930 [2024-11-15 11:53:24.189371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.930 [2024-11-15 11:53:24.189376] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:58.930 [2024-11-15 11:53:24.201092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.930 [2024-11-15 11:53:24.201665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.930 [2024-11-15 11:53:24.201696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:58.930 [2024-11-15 11:53:24.201704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:58.930 [2024-11-15 11:53:24.201871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:58.930 [2024-11-15 11:53:24.202023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.930 [2024-11-15 11:53:24.202029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.930 [2024-11-15 11:53:24.202035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.930 [2024-11-15 11:53:24.202040] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.930 [2024-11-15 11:53:24.213746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.930 [2024-11-15 11:53:24.214241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.930 [2024-11-15 11:53:24.214256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:58.930 [2024-11-15 11:53:24.214262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:58.930 [2024-11-15 11:53:24.214412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:58.930 [2024-11-15 11:53:24.214604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.930 [2024-11-15 11:53:24.214614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.930 [2024-11-15 11:53:24.214619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.930 [2024-11-15 11:53:24.214624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:58.930 [2024-11-15 11:53:24.226463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.930 [2024-11-15 11:53:24.227031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.930 [2024-11-15 11:53:24.227061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:58.930 [2024-11-15 11:53:24.227070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:58.931 [2024-11-15 11:53:24.227237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:58.931 [2024-11-15 11:53:24.227389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.931 [2024-11-15 11:53:24.227396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.931 [2024-11-15 11:53:24.227401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.931 [2024-11-15 11:53:24.227407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.931 [2024-11-15 11:53:24.239110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.931 [2024-11-15 11:53:24.239607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.931 [2024-11-15 11:53:24.239636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:58.931 [2024-11-15 11:53:24.239645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:58.931 [2024-11-15 11:53:24.239813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:58.931 [2024-11-15 11:53:24.239966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.931 [2024-11-15 11:53:24.239972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.931 [2024-11-15 11:53:24.239977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.931 [2024-11-15 11:53:24.239983] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:58.931 [2024-11-15 11:53:24.251833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.931 [2024-11-15 11:53:24.252331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.931 [2024-11-15 11:53:24.252345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:58.931 [2024-11-15 11:53:24.252351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:58.931 [2024-11-15 11:53:24.252502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:58.931 [2024-11-15 11:53:24.252657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.931 [2024-11-15 11:53:24.252664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.931 [2024-11-15 11:53:24.252669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.931 [2024-11-15 11:53:24.252678] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.931 [2024-11-15 11:53:24.264508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.931 [2024-11-15 11:53:24.265089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.931 [2024-11-15 11:53:24.265119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:58.931 [2024-11-15 11:53:24.265128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:58.931 [2024-11-15 11:53:24.265295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:58.931 [2024-11-15 11:53:24.265447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.931 [2024-11-15 11:53:24.265453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.931 [2024-11-15 11:53:24.265459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.931 [2024-11-15 11:53:24.265464] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:58.931 [2024-11-15 11:53:24.277164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.931 [2024-11-15 11:53:24.277794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.931 [2024-11-15 11:53:24.277825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:58.931 [2024-11-15 11:53:24.277834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:58.931 [2024-11-15 11:53:24.278001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:58.931 [2024-11-15 11:53:24.278155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.931 [2024-11-15 11:53:24.278162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.931 [2024-11-15 11:53:24.278168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.931 [2024-11-15 11:53:24.278174] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.931 [2024-11-15 11:53:24.289876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.931 [2024-11-15 11:53:24.290435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.931 [2024-11-15 11:53:24.290465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:58.931 [2024-11-15 11:53:24.290474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:58.931 [2024-11-15 11:53:24.290654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:58.931 [2024-11-15 11:53:24.290808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.931 [2024-11-15 11:53:24.290814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.931 [2024-11-15 11:53:24.290820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.931 [2024-11-15 11:53:24.290825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:58.931 [2024-11-15 11:53:24.302521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.931 [2024-11-15 11:53:24.303065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.931 [2024-11-15 11:53:24.303094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:58.931 [2024-11-15 11:53:24.303103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:58.931 [2024-11-15 11:53:24.303270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:58.931 [2024-11-15 11:53:24.303422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.931 [2024-11-15 11:53:24.303428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.931 [2024-11-15 11:53:24.303434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.931 [2024-11-15 11:53:24.303440] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.931 [2024-11-15 11:53:24.315220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.931 [2024-11-15 11:53:24.315702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.931 [2024-11-15 11:53:24.315732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:58.931 [2024-11-15 11:53:24.315741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:58.931 [2024-11-15 11:53:24.315910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:58.931 [2024-11-15 11:53:24.316062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.931 [2024-11-15 11:53:24.316069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.931 [2024-11-15 11:53:24.316074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.931 [2024-11-15 11:53:24.316079] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:58.931 [2024-11-15 11:53:24.327929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.931 [2024-11-15 11:53:24.328433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.931 [2024-11-15 11:53:24.328463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:58.931 [2024-11-15 11:53:24.328472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:58.931 [2024-11-15 11:53:24.328645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:58.931 [2024-11-15 11:53:24.328799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.931 [2024-11-15 11:53:24.328805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.931 [2024-11-15 11:53:24.328810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.931 [2024-11-15 11:53:24.328816] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.931 [2024-11-15 11:53:24.340650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.931 [2024-11-15 11:53:24.341190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.931 [2024-11-15 11:53:24.341221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:58.931 [2024-11-15 11:53:24.341233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:58.931 [2024-11-15 11:53:24.341399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:58.931 [2024-11-15 11:53:24.341552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.931 [2024-11-15 11:53:24.341559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.931 [2024-11-15 11:53:24.341573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.931 [2024-11-15 11:53:24.341578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:58.931 [2024-11-15 11:53:24.353268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.931 [2024-11-15 11:53:24.353731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.932 [2024-11-15 11:53:24.353746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:58.932 [2024-11-15 11:53:24.353752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:58.932 [2024-11-15 11:53:24.353903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:58.932 [2024-11-15 11:53:24.354052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.932 [2024-11-15 11:53:24.354058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.932 [2024-11-15 11:53:24.354063] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.932 [2024-11-15 11:53:24.354068] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.932 [2024-11-15 11:53:24.365898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.932 [2024-11-15 11:53:24.366379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.932 [2024-11-15 11:53:24.366391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:58.932 [2024-11-15 11:53:24.366397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:58.932 [2024-11-15 11:53:24.366546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:58.932 [2024-11-15 11:53:24.366703] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.932 [2024-11-15 11:53:24.366709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.932 [2024-11-15 11:53:24.366714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.932 [2024-11-15 11:53:24.366719] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:58.932 [2024-11-15 11:53:24.378541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.932 [2024-11-15 11:53:24.379035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.932 [2024-11-15 11:53:24.379048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:58.932 [2024-11-15 11:53:24.379053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:58.932 [2024-11-15 11:53:24.379203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:58.932 [2024-11-15 11:53:24.379352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.932 [2024-11-15 11:53:24.379361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.932 [2024-11-15 11:53:24.379366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.932 [2024-11-15 11:53:24.379371] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.932 [2024-11-15 11:53:24.391200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.932 [2024-11-15 11:53:24.391689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.932 [2024-11-15 11:53:24.391720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:58.932 [2024-11-15 11:53:24.391729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:58.932 [2024-11-15 11:53:24.391897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:58.932 [2024-11-15 11:53:24.392050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.932 [2024-11-15 11:53:24.392056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.932 [2024-11-15 11:53:24.392062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.932 [2024-11-15 11:53:24.392067] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:58.932 [2024-11-15 11:53:24.403914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.932 [2024-11-15 11:53:24.404483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.932 [2024-11-15 11:53:24.404513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:58.932 [2024-11-15 11:53:24.404521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:58.932 [2024-11-15 11:53:24.404695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:58.932 [2024-11-15 11:53:24.404848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.932 [2024-11-15 11:53:24.404854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.932 [2024-11-15 11:53:24.404860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.932 [2024-11-15 11:53:24.404865] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.932 [2024-11-15 11:53:24.416556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.932 [2024-11-15 11:53:24.417121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.932 [2024-11-15 11:53:24.417152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:58.932 [2024-11-15 11:53:24.417160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:58.932 [2024-11-15 11:53:24.417327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:58.932 [2024-11-15 11:53:24.417479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.932 [2024-11-15 11:53:24.417485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.932 [2024-11-15 11:53:24.417491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.932 [2024-11-15 11:53:24.417500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:59.200 [2024-11-15 11:53:24.429215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.201 [2024-11-15 11:53:24.429831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.201 [2024-11-15 11:53:24.429862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:59.201 [2024-11-15 11:53:24.429870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:59.201 [2024-11-15 11:53:24.430036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:59.201 [2024-11-15 11:53:24.430189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.201 [2024-11-15 11:53:24.430195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.201 [2024-11-15 11:53:24.430201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.201 [2024-11-15 11:53:24.430206] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.201 [2024-11-15 11:53:24.441904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.201 [2024-11-15 11:53:24.442380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.201 [2024-11-15 11:53:24.442410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:59.201 [2024-11-15 11:53:24.442419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:59.201 [2024-11-15 11:53:24.442592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:59.201 [2024-11-15 11:53:24.442746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.201 [2024-11-15 11:53:24.442752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.201 [2024-11-15 11:53:24.442757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.201 [2024-11-15 11:53:24.442763] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:59.201 [2024-11-15 11:53:24.454596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.201 [2024-11-15 11:53:24.455161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.201 [2024-11-15 11:53:24.455192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:59.201 [2024-11-15 11:53:24.455200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:59.201 [2024-11-15 11:53:24.455367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:59.201 [2024-11-15 11:53:24.455520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.201 [2024-11-15 11:53:24.455526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.201 [2024-11-15 11:53:24.455531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.201 [2024-11-15 11:53:24.455537] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.201 [2024-11-15 11:53:24.467233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.201 [2024-11-15 11:53:24.467859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.201 [2024-11-15 11:53:24.467890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:59.201 [2024-11-15 11:53:24.467898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:59.201 [2024-11-15 11:53:24.468064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:59.201 [2024-11-15 11:53:24.468217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.201 [2024-11-15 11:53:24.468223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.201 [2024-11-15 11:53:24.468229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.201 [2024-11-15 11:53:24.468234] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:59.201 [2024-11-15 11:53:24.479933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.201 [2024-11-15 11:53:24.480508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.201 [2024-11-15 11:53:24.480538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:59.201 [2024-11-15 11:53:24.480547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:59.201 [2024-11-15 11:53:24.480721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:59.201 [2024-11-15 11:53:24.480874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.201 [2024-11-15 11:53:24.480880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.201 [2024-11-15 11:53:24.480886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.201 [2024-11-15 11:53:24.480892] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.201 [2024-11-15 11:53:24.492583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.201 [2024-11-15 11:53:24.493153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.201 [2024-11-15 11:53:24.493184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:59.201 [2024-11-15 11:53:24.493192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:59.201 [2024-11-15 11:53:24.493359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:59.201 [2024-11-15 11:53:24.493511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.201 [2024-11-15 11:53:24.493518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.201 [2024-11-15 11:53:24.493523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.201 [2024-11-15 11:53:24.493529] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:59.201 [2024-11-15 11:53:24.505230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.201 [2024-11-15 11:53:24.505864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.201 [2024-11-15 11:53:24.505895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:59.201 [2024-11-15 11:53:24.505903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:59.201 [2024-11-15 11:53:24.506073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:59.201 [2024-11-15 11:53:24.506226] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.201 [2024-11-15 11:53:24.506232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.201 [2024-11-15 11:53:24.506237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.201 [2024-11-15 11:53:24.506243] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.201 [2024-11-15 11:53:24.517935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.201 [2024-11-15 11:53:24.518426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.201 [2024-11-15 11:53:24.518441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:59.201 [2024-11-15 11:53:24.518446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:59.201 [2024-11-15 11:53:24.518601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:59.201 [2024-11-15 11:53:24.518753] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.201 [2024-11-15 11:53:24.518758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.201 [2024-11-15 11:53:24.518763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.201 [2024-11-15 11:53:24.518768] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:59.201 [2024-11-15 11:53:24.530611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.201 [2024-11-15 11:53:24.531182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.201 [2024-11-15 11:53:24.531212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:59.201 [2024-11-15 11:53:24.531221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:59.201 [2024-11-15 11:53:24.531387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:59.201 [2024-11-15 11:53:24.531540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.201 [2024-11-15 11:53:24.531547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.201 [2024-11-15 11:53:24.531552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.201 [2024-11-15 11:53:24.531557] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.201 [2024-11-15 11:53:24.543333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.201 [2024-11-15 11:53:24.543714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.201 [2024-11-15 11:53:24.543729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:59.201 [2024-11-15 11:53:24.543736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:59.201 [2024-11-15 11:53:24.543887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:59.201 [2024-11-15 11:53:24.544038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.201 [2024-11-15 11:53:24.544049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.201 [2024-11-15 11:53:24.544054] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.201 [2024-11-15 11:53:24.544060] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:59.201 [2024-11-15 11:53:24.556037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.201 [2024-11-15 11:53:24.556539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.201 [2024-11-15 11:53:24.556552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:59.201 [2024-11-15 11:53:24.556557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:59.201 [2024-11-15 11:53:24.556711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:59.201 [2024-11-15 11:53:24.556862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.201 [2024-11-15 11:53:24.556867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.201 [2024-11-15 11:53:24.556872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.201 [2024-11-15 11:53:24.556877] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.201 [2024-11-15 11:53:24.568712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.201 [2024-11-15 11:53:24.569163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.201 [2024-11-15 11:53:24.569175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:59.201 [2024-11-15 11:53:24.569180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:59.201 [2024-11-15 11:53:24.569330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:59.201 [2024-11-15 11:53:24.569479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.202 [2024-11-15 11:53:24.569485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.202 [2024-11-15 11:53:24.569490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.202 [2024-11-15 11:53:24.569495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:59.202 [2024-11-15 11:53:24.581329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.202 [2024-11-15 11:53:24.581888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.202 [2024-11-15 11:53:24.581918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:59.202 [2024-11-15 11:53:24.581927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:59.202 [2024-11-15 11:53:24.582093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:59.202 [2024-11-15 11:53:24.582246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.202 [2024-11-15 11:53:24.582252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.202 [2024-11-15 11:53:24.582257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.202 [2024-11-15 11:53:24.582266] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.202 [2024-11-15 11:53:24.593965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.202 [2024-11-15 11:53:24.594517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.202 [2024-11-15 11:53:24.594547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:59.202 [2024-11-15 11:53:24.594556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:59.202 [2024-11-15 11:53:24.594735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:59.202 [2024-11-15 11:53:24.594889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.202 [2024-11-15 11:53:24.594894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.202 [2024-11-15 11:53:24.594900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.202 [2024-11-15 11:53:24.594906] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:59.202 [2024-11-15 11:53:24.606603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.202 [2024-11-15 11:53:24.607162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.202 [2024-11-15 11:53:24.607192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:59.202 [2024-11-15 11:53:24.607201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:59.202 [2024-11-15 11:53:24.607368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:59.202 [2024-11-15 11:53:24.607520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.202 [2024-11-15 11:53:24.607526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.202 [2024-11-15 11:53:24.607532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.202 [2024-11-15 11:53:24.607537] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.202 5382.20 IOPS, 21.02 MiB/s [2024-11-15T10:53:24.700Z] [2024-11-15 11:53:24.619247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.202 [2024-11-15 11:53:24.619839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.202 [2024-11-15 11:53:24.619869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:59.202 [2024-11-15 11:53:24.619878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:59.202 [2024-11-15 11:53:24.620044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:59.202 [2024-11-15 11:53:24.620197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.202 [2024-11-15 11:53:24.620203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.202 [2024-11-15 11:53:24.620208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.202 [2024-11-15 11:53:24.620214] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:59.202 [2024-11-15 11:53:24.631922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.202 [2024-11-15 11:53:24.632520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.202 [2024-11-15 11:53:24.632550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:29:59.202 [2024-11-15 11:53:24.632559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:29:59.202 [2024-11-15 11:53:24.632733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:29:59.202 [2024-11-15 11:53:24.632886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.202 [2024-11-15 11:53:24.632892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.202 [2024-11-15 11:53:24.632898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.202 [2024-11-15 11:53:24.632903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
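The 5382.20 IOPS / 21.02 MiB/s figure interleaved above is periodic progress output from the performance tool driving the test. The two numbers are mutually consistent with a 4 KiB I/O size — an assumption, since the block size is not printed in this excerpt — as this small check shows:

    /* Sketch: sanity-check the logged bandwidth from the logged IOPS,
     * assuming a 4 KiB I/O size (not shown in this excerpt). */
    #include <stdio.h>

    int main(void)
    {
        double iops = 5382.20;
        double io_size = 4096.0;  /* assumed bytes per I/O */
        double mib_s = iops * io_size / (1024.0 * 1024.0);
        printf("%.2f IOPS @ %.0f B -> %.2f MiB/s\n", iops, io_size, mib_s);
        /* Prints 21.02 MiB/s, matching the log line above. */
        return 0;
    }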
00:29:59.202 [2024-11-15 11:53:24.644586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.202 [2024-11-15 11:53:24.645189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.202 [2024-11-15 11:53:24.645219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.202 [2024-11-15 11:53:24.645228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.202 [2024-11-15 11:53:24.645394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.202 [2024-11-15 11:53:24.645547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.202 [2024-11-15 11:53:24.645553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.202 [2024-11-15 11:53:24.645559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.202 [2024-11-15 11:53:24.645570] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.202 [2024-11-15 11:53:24.657263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.202 [2024-11-15 11:53:24.657882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.202 [2024-11-15 11:53:24.657912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.202 [2024-11-15 11:53:24.657921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.202 [2024-11-15 11:53:24.658087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.202 [2024-11-15 11:53:24.658239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.202 [2024-11-15 11:53:24.658246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.202 [2024-11-15 11:53:24.658251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.202 [2024-11-15 11:53:24.658257] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.202 [2024-11-15 11:53:24.669954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.202 [2024-11-15 11:53:24.670443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.202 [2024-11-15 11:53:24.670473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.202 [2024-11-15 11:53:24.670485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.202 [2024-11-15 11:53:24.670663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.202 [2024-11-15 11:53:24.670817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.202 [2024-11-15 11:53:24.670823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.202 [2024-11-15 11:53:24.670828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.202 [2024-11-15 11:53:24.670834] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.202 [2024-11-15 11:53:24.682673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.202 [2024-11-15 11:53:24.683206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.202 [2024-11-15 11:53:24.683236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.202 [2024-11-15 11:53:24.683244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.202 [2024-11-15 11:53:24.683411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.202 [2024-11-15 11:53:24.683571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.202 [2024-11-15 11:53:24.683578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.202 [2024-11-15 11:53:24.683583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.202 [2024-11-15 11:53:24.683588] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.536 [2024-11-15 11:53:24.695296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.536 [2024-11-15 11:53:24.695883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.536 [2024-11-15 11:53:24.695914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.536 [2024-11-15 11:53:24.695923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.536 [2024-11-15 11:53:24.696090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.536 [2024-11-15 11:53:24.696243] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.536 [2024-11-15 11:53:24.696249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.536 [2024-11-15 11:53:24.696255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.536 [2024-11-15 11:53:24.696260] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.536 [2024-11-15 11:53:24.707962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.536 [2024-11-15 11:53:24.708514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.536 [2024-11-15 11:53:24.708544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.536 [2024-11-15 11:53:24.708553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.536 [2024-11-15 11:53:24.708728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.536 [2024-11-15 11:53:24.708885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.536 [2024-11-15 11:53:24.708891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.536 [2024-11-15 11:53:24.708897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.536 [2024-11-15 11:53:24.708902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.536 [2024-11-15 11:53:24.720622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.536 [2024-11-15 11:53:24.721191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.536 [2024-11-15 11:53:24.721221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.536 [2024-11-15 11:53:24.721229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.536 [2024-11-15 11:53:24.721395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.536 [2024-11-15 11:53:24.721548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.536 [2024-11-15 11:53:24.721556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.536 [2024-11-15 11:53:24.721571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.536 [2024-11-15 11:53:24.721577] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.536 [2024-11-15 11:53:24.733283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.536 [2024-11-15 11:53:24.733842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.536 [2024-11-15 11:53:24.733872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.536 [2024-11-15 11:53:24.733881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.536 [2024-11-15 11:53:24.734047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.536 [2024-11-15 11:53:24.734200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.536 [2024-11-15 11:53:24.734206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.536 [2024-11-15 11:53:24.734212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.536 [2024-11-15 11:53:24.734217] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.536 [2024-11-15 11:53:24.745919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.536 [2024-11-15 11:53:24.746491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.536 [2024-11-15 11:53:24.746522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.536 [2024-11-15 11:53:24.746530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.536 [2024-11-15 11:53:24.746704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.536 [2024-11-15 11:53:24.746858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.536 [2024-11-15 11:53:24.746864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.536 [2024-11-15 11:53:24.746870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.536 [2024-11-15 11:53:24.746878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.536 [2024-11-15 11:53:24.758576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.536 [2024-11-15 11:53:24.759052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.536 [2024-11-15 11:53:24.759066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.536 [2024-11-15 11:53:24.759072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.537 [2024-11-15 11:53:24.759223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.537 [2024-11-15 11:53:24.759373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.537 [2024-11-15 11:53:24.759379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.537 [2024-11-15 11:53:24.759384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.537 [2024-11-15 11:53:24.759388] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.537 [2024-11-15 11:53:24.771217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.537 [2024-11-15 11:53:24.771602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.537 [2024-11-15 11:53:24.771623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.537 [2024-11-15 11:53:24.771629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.537 [2024-11-15 11:53:24.771784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.537 [2024-11-15 11:53:24.771935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.537 [2024-11-15 11:53:24.771941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.537 [2024-11-15 11:53:24.771946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.537 [2024-11-15 11:53:24.771951] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.537 [2024-11-15 11:53:24.783924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.537 [2024-11-15 11:53:24.784525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.537 [2024-11-15 11:53:24.784555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.537 [2024-11-15 11:53:24.784571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.537 [2024-11-15 11:53:24.784739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.537 [2024-11-15 11:53:24.784892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.537 [2024-11-15 11:53:24.784899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.537 [2024-11-15 11:53:24.784904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.537 [2024-11-15 11:53:24.784910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.537 [2024-11-15 11:53:24.796627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.537 [2024-11-15 11:53:24.797207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.537 [2024-11-15 11:53:24.797238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.537 [2024-11-15 11:53:24.797247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.537 [2024-11-15 11:53:24.797414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.537 [2024-11-15 11:53:24.797576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.537 [2024-11-15 11:53:24.797583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.537 [2024-11-15 11:53:24.797588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.537 [2024-11-15 11:53:24.797594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.537 [2024-11-15 11:53:24.809295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.537 [2024-11-15 11:53:24.809848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.537 [2024-11-15 11:53:24.809878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.537 [2024-11-15 11:53:24.809887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.537 [2024-11-15 11:53:24.810053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.537 [2024-11-15 11:53:24.810206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.537 [2024-11-15 11:53:24.810212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.537 [2024-11-15 11:53:24.810218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.537 [2024-11-15 11:53:24.810223] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.537 [2024-11-15 11:53:24.821911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.537 [2024-11-15 11:53:24.822479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.537 [2024-11-15 11:53:24.822509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.537 [2024-11-15 11:53:24.822518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.537 [2024-11-15 11:53:24.822703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.537 [2024-11-15 11:53:24.822858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.537 [2024-11-15 11:53:24.822864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.537 [2024-11-15 11:53:24.822869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.537 [2024-11-15 11:53:24.822875] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.537 [2024-11-15 11:53:24.834565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.537 [2024-11-15 11:53:24.835136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.537 [2024-11-15 11:53:24.835166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.537 [2024-11-15 11:53:24.835178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.537 [2024-11-15 11:53:24.835345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.537 [2024-11-15 11:53:24.835498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.537 [2024-11-15 11:53:24.835504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.537 [2024-11-15 11:53:24.835510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.537 [2024-11-15 11:53:24.835515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.537 [2024-11-15 11:53:24.847214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.537 [2024-11-15 11:53:24.847682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.537 [2024-11-15 11:53:24.847712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.537 [2024-11-15 11:53:24.847721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.537 [2024-11-15 11:53:24.847890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.537 [2024-11-15 11:53:24.848043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.537 [2024-11-15 11:53:24.848049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.537 [2024-11-15 11:53:24.848055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.537 [2024-11-15 11:53:24.848060] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.537 [2024-11-15 11:53:24.859897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.537 [2024-11-15 11:53:24.860469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.537 [2024-11-15 11:53:24.860499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.537 [2024-11-15 11:53:24.860508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.537 [2024-11-15 11:53:24.860683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.537 [2024-11-15 11:53:24.860837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.537 [2024-11-15 11:53:24.860843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.537 [2024-11-15 11:53:24.860849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.537 [2024-11-15 11:53:24.860854] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.537 [2024-11-15 11:53:24.872541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.537 [2024-11-15 11:53:24.873122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.537 [2024-11-15 11:53:24.873153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.537 [2024-11-15 11:53:24.873161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.538 [2024-11-15 11:53:24.873330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.538 [2024-11-15 11:53:24.873487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.538 [2024-11-15 11:53:24.873493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.538 [2024-11-15 11:53:24.873498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.538 [2024-11-15 11:53:24.873504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.538 [2024-11-15 11:53:24.885200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.538 [2024-11-15 11:53:24.885676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.538 [2024-11-15 11:53:24.885692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.538 [2024-11-15 11:53:24.885698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.538 [2024-11-15 11:53:24.885849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.538 [2024-11-15 11:53:24.885999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.538 [2024-11-15 11:53:24.886004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.538 [2024-11-15 11:53:24.886009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.538 [2024-11-15 11:53:24.886014] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.538 [2024-11-15 11:53:24.897874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.538 [2024-11-15 11:53:24.898363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.538 [2024-11-15 11:53:24.898377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.538 [2024-11-15 11:53:24.898382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.538 [2024-11-15 11:53:24.898532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.538 [2024-11-15 11:53:24.898686] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.538 [2024-11-15 11:53:24.898693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.538 [2024-11-15 11:53:24.898698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.538 [2024-11-15 11:53:24.898702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.538 [2024-11-15 11:53:24.910547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.538 [2024-11-15 11:53:24.910902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.538 [2024-11-15 11:53:24.910916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.538 [2024-11-15 11:53:24.910921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.538 [2024-11-15 11:53:24.911071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.538 [2024-11-15 11:53:24.911222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.538 [2024-11-15 11:53:24.911227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.538 [2024-11-15 11:53:24.911232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.538 [2024-11-15 11:53:24.911240] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.538 [2024-11-15 11:53:24.923243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.538 [2024-11-15 11:53:24.923864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.538 [2024-11-15 11:53:24.923895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.538 [2024-11-15 11:53:24.923904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.538 [2024-11-15 11:53:24.924070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.538 [2024-11-15 11:53:24.924225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.538 [2024-11-15 11:53:24.924231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.538 [2024-11-15 11:53:24.924236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.538 [2024-11-15 11:53:24.924242] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.538 [2024-11-15 11:53:24.935947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.538 [2024-11-15 11:53:24.936433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.538 [2024-11-15 11:53:24.936448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.538 [2024-11-15 11:53:24.936453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.538 [2024-11-15 11:53:24.936609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.538 [2024-11-15 11:53:24.936760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.538 [2024-11-15 11:53:24.936765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.538 [2024-11-15 11:53:24.936771] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.538 [2024-11-15 11:53:24.936775] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.538 [2024-11-15 11:53:24.948644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.538 [2024-11-15 11:53:24.949107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.538 [2024-11-15 11:53:24.949138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.538 [2024-11-15 11:53:24.949146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.538 [2024-11-15 11:53:24.949313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.538 [2024-11-15 11:53:24.949466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.538 [2024-11-15 11:53:24.949473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.538 [2024-11-15 11:53:24.949478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.538 [2024-11-15 11:53:24.949484] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.538 [2024-11-15 11:53:24.961343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.538 [2024-11-15 11:53:24.961929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.538 [2024-11-15 11:53:24.961959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.538 [2024-11-15 11:53:24.961968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.538 [2024-11-15 11:53:24.962134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.538 [2024-11-15 11:53:24.962287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.538 [2024-11-15 11:53:24.962294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.538 [2024-11-15 11:53:24.962300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.538 [2024-11-15 11:53:24.962305] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.538 [2024-11-15 11:53:24.974021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.538 [2024-11-15 11:53:24.974517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.538 [2024-11-15 11:53:24.974532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.538 [2024-11-15 11:53:24.974538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.538 [2024-11-15 11:53:24.974693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.538 [2024-11-15 11:53:24.974844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.538 [2024-11-15 11:53:24.974850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.538 [2024-11-15 11:53:24.974855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.538 [2024-11-15 11:53:24.974860] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.538 [2024-11-15 11:53:24.986636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.538 [2024-11-15 11:53:24.987104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.538 [2024-11-15 11:53:24.987117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.538 [2024-11-15 11:53:24.987122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.538 [2024-11-15 11:53:24.987272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.538 [2024-11-15 11:53:24.987422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.538 [2024-11-15 11:53:24.987427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.538 [2024-11-15 11:53:24.987432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.538 [2024-11-15 11:53:24.987436] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.538 [2024-11-15 11:53:24.999293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.539 [2024-11-15 11:53:24.999908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.539 [2024-11-15 11:53:24.999939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.539 [2024-11-15 11:53:24.999950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.539 [2024-11-15 11:53:25.000117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.539 [2024-11-15 11:53:25.000271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.539 [2024-11-15 11:53:25.000277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.539 [2024-11-15 11:53:25.000282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.539 [2024-11-15 11:53:25.000287] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.539 [2024-11-15 11:53:25.011994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.539 [2024-11-15 11:53:25.012475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.539 [2024-11-15 11:53:25.012504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.539 [2024-11-15 11:53:25.012513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.539 [2024-11-15 11:53:25.012688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.539 [2024-11-15 11:53:25.012842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.539 [2024-11-15 11:53:25.012848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.539 [2024-11-15 11:53:25.012853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.539 [2024-11-15 11:53:25.012859] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.810 [2024-11-15 11:53:25.024710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.810 [2024-11-15 11:53:25.025283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.810 [2024-11-15 11:53:25.025313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.810 [2024-11-15 11:53:25.025322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.810 [2024-11-15 11:53:25.025488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.810 [2024-11-15 11:53:25.025648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.810 [2024-11-15 11:53:25.025655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.810 [2024-11-15 11:53:25.025660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.810 [2024-11-15 11:53:25.025666] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.810 [2024-11-15 11:53:25.037371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.810 [2024-11-15 11:53:25.037873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.810 [2024-11-15 11:53:25.037903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.810 [2024-11-15 11:53:25.037912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.810 [2024-11-15 11:53:25.038078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.810 [2024-11-15 11:53:25.038234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.810 [2024-11-15 11:53:25.038242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.810 [2024-11-15 11:53:25.038247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.810 [2024-11-15 11:53:25.038253] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.810 [2024-11-15 11:53:25.050112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.810 [2024-11-15 11:53:25.050800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.810 [2024-11-15 11:53:25.050831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.810 [2024-11-15 11:53:25.050840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.810 [2024-11-15 11:53:25.051008] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.810 [2024-11-15 11:53:25.051160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.810 [2024-11-15 11:53:25.051166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.810 [2024-11-15 11:53:25.051172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.810 [2024-11-15 11:53:25.051177] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.810 [2024-11-15 11:53:25.062751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.810 [2024-11-15 11:53:25.063335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.810 [2024-11-15 11:53:25.063366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.811 [2024-11-15 11:53:25.063374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.811 [2024-11-15 11:53:25.063541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.811 [2024-11-15 11:53:25.063701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.811 [2024-11-15 11:53:25.063709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.811 [2024-11-15 11:53:25.063714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.811 [2024-11-15 11:53:25.063720] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.811 [2024-11-15 11:53:25.075434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.811 [2024-11-15 11:53:25.075973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.811 [2024-11-15 11:53:25.075989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.811 [2024-11-15 11:53:25.075994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.811 [2024-11-15 11:53:25.076145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.811 [2024-11-15 11:53:25.076296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.811 [2024-11-15 11:53:25.076302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.811 [2024-11-15 11:53:25.076307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.811 [2024-11-15 11:53:25.076319] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.811 [2024-11-15 11:53:25.088173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.811 [2024-11-15 11:53:25.088622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.811 [2024-11-15 11:53:25.088636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.811 [2024-11-15 11:53:25.088642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.811 [2024-11-15 11:53:25.088792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.811 [2024-11-15 11:53:25.088942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.811 [2024-11-15 11:53:25.088948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.811 [2024-11-15 11:53:25.088955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.811 [2024-11-15 11:53:25.088959] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.811 [2024-11-15 11:53:25.100824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.811 [2024-11-15 11:53:25.101315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.811 [2024-11-15 11:53:25.101328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.811 [2024-11-15 11:53:25.101333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.811 [2024-11-15 11:53:25.101483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.811 [2024-11-15 11:53:25.101638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.811 [2024-11-15 11:53:25.101644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.811 [2024-11-15 11:53:25.101649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.811 [2024-11-15 11:53:25.101653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.811 [2024-11-15 11:53:25.113498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.811 [2024-11-15 11:53:25.113962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.811 [2024-11-15 11:53:25.113975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.811 [2024-11-15 11:53:25.113980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.811 [2024-11-15 11:53:25.114130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.811 [2024-11-15 11:53:25.114280] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.811 [2024-11-15 11:53:25.114286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.811 [2024-11-15 11:53:25.114290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.811 [2024-11-15 11:53:25.114295] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.811 [2024-11-15 11:53:25.126154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.811 [2024-11-15 11:53:25.126622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.811 [2024-11-15 11:53:25.126635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.811 [2024-11-15 11:53:25.126641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.811 [2024-11-15 11:53:25.126791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.811 [2024-11-15 11:53:25.126941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.811 [2024-11-15 11:53:25.126947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.811 [2024-11-15 11:53:25.126952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.811 [2024-11-15 11:53:25.126956] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.811 [2024-11-15 11:53:25.138806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.811 [2024-11-15 11:53:25.139294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.811 [2024-11-15 11:53:25.139307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.811 [2024-11-15 11:53:25.139312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.811 [2024-11-15 11:53:25.139462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.811 [2024-11-15 11:53:25.139616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.811 [2024-11-15 11:53:25.139622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.811 [2024-11-15 11:53:25.139627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.811 [2024-11-15 11:53:25.139632] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.811 [2024-11-15 11:53:25.151478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.811 [2024-11-15 11:53:25.151940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.811 [2024-11-15 11:53:25.151952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.811 [2024-11-15 11:53:25.151958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.811 [2024-11-15 11:53:25.152107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.811 [2024-11-15 11:53:25.152257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.811 [2024-11-15 11:53:25.152262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.811 [2024-11-15 11:53:25.152267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.811 [2024-11-15 11:53:25.152272] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.811 [2024-11-15 11:53:25.164262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.811 [2024-11-15 11:53:25.164623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.811 [2024-11-15 11:53:25.164637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.811 [2024-11-15 11:53:25.164646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.811 [2024-11-15 11:53:25.164797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.811 [2024-11-15 11:53:25.164947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.811 [2024-11-15 11:53:25.164952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.811 [2024-11-15 11:53:25.164957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.811 [2024-11-15 11:53:25.164962] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.811 [2024-11-15 11:53:25.176949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.811 [2024-11-15 11:53:25.177401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.811 [2024-11-15 11:53:25.177414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.811 [2024-11-15 11:53:25.177419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.811 [2024-11-15 11:53:25.177575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.811 [2024-11-15 11:53:25.177725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.811 [2024-11-15 11:53:25.177731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.811 [2024-11-15 11:53:25.177736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.811 [2024-11-15 11:53:25.177741] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.811 [2024-11-15 11:53:25.189583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.811 [2024-11-15 11:53:25.190159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.812 [2024-11-15 11:53:25.190189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.812 [2024-11-15 11:53:25.190198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.812 [2024-11-15 11:53:25.190364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.812 [2024-11-15 11:53:25.190517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.812 [2024-11-15 11:53:25.190523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.812 [2024-11-15 11:53:25.190529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.812 [2024-11-15 11:53:25.190534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.812 [2024-11-15 11:53:25.202269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.812 [2024-11-15 11:53:25.202682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.812 [2024-11-15 11:53:25.202712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.812 [2024-11-15 11:53:25.202721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.812 [2024-11-15 11:53:25.202890] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.812 [2024-11-15 11:53:25.203046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.812 [2024-11-15 11:53:25.203053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.812 [2024-11-15 11:53:25.203059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.812 [2024-11-15 11:53:25.203064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.812 [2024-11-15 11:53:25.214910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.812 [2024-11-15 11:53:25.215377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.812 [2024-11-15 11:53:25.215392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.812 [2024-11-15 11:53:25.215398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.812 [2024-11-15 11:53:25.215549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.812 [2024-11-15 11:53:25.215704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.812 [2024-11-15 11:53:25.215709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.812 [2024-11-15 11:53:25.215714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.812 [2024-11-15 11:53:25.215719] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.812 [2024-11-15 11:53:25.227566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.812 [2024-11-15 11:53:25.228059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.812 [2024-11-15 11:53:25.228072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.812 [2024-11-15 11:53:25.228077] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.812 [2024-11-15 11:53:25.228227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.812 [2024-11-15 11:53:25.228377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.812 [2024-11-15 11:53:25.228383] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.812 [2024-11-15 11:53:25.228388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.812 [2024-11-15 11:53:25.228392] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.812 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1256395 Killed "${NVMF_APP[@]}" "$@"
00:29:59.812 11:53:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:29:59.812 [2024-11-15 11:53:25.240240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.812 11:53:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:29:59.812 [2024-11-15 11:53:25.240609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.812 [2024-11-15 11:53:25.240623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.812 [2024-11-15 11:53:25.240629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.812 11:53:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:59.812 [2024-11-15 11:53:25.240779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.812 11:53:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:59.812 [2024-11-15 11:53:25.240932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.812 [2024-11-15 11:53:25.240939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.812 [2024-11-15 11:53:25.240944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.812 [2024-11-15 11:53:25.240949] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.812 11:53:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:59.812 11:53:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1258023
00:29:59.812 11:53:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1258023
00:29:59.812 11:53:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:29:59.812 11:53:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 1258023 ']'
00:29:59.812 11:53:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:59.812 11:53:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100
00:29:59.812 11:53:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:59.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 11:53:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable
00:29:59.812 11:53:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:59.812 [2024-11-15 11:53:25.252945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.812 [2024-11-15 11:53:25.253361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.812 [2024-11-15 11:53:25.253375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.812 [2024-11-15 11:53:25.253382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.812 [2024-11-15 11:53:25.253532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.812 [2024-11-15 11:53:25.253688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.812 [2024-11-15 11:53:25.253695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.812 [2024-11-15 11:53:25.253700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.812 [2024-11-15 11:53:25.253706] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.812 [2024-11-15 11:53:25.265552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.812 [2024-11-15 11:53:25.266122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.812 [2024-11-15 11:53:25.266153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.812 [2024-11-15 11:53:25.266161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.812 [2024-11-15 11:53:25.266328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.812 [2024-11-15 11:53:25.266481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.812 [2024-11-15 11:53:25.266487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.812 [2024-11-15 11:53:25.266497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.812 [2024-11-15 11:53:25.266503] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.812 [2024-11-15 11:53:25.278231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.812 [2024-11-15 11:53:25.278844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.812 [2024-11-15 11:53:25.278875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.812 [2024-11-15 11:53:25.278884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.812 [2024-11-15 11:53:25.279051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.812 [2024-11-15 11:53:25.279204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.812 [2024-11-15 11:53:25.279210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.812 [2024-11-15 11:53:25.279215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.812 [2024-11-15 11:53:25.279221] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.812 [2024-11-15 11:53:25.290932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.812 [2024-11-15 11:53:25.291436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.812 [2024-11-15 11:53:25.291451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:29:59.812 [2024-11-15 11:53:25.291457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:29:59.812 [2024-11-15 11:53:25.291613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:29:59.812 [2024-11-15 11:53:25.291764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.813 [2024-11-15 11:53:25.291770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.813 [2024-11-15 11:53:25.291775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.813 [2024-11-15 11:53:25.291781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.126 [2024-11-15 11:53:25.303643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.126 [2024-11-15 11:53:25.303804] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
00:30:00.126 [2024-11-15 11:53:25.303849] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:00.126 [2024-11-15 11:53:25.304121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.126 [2024-11-15 11:53:25.304134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.126 [2024-11-15 11:53:25.304140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.126 [2024-11-15 11:53:25.304290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.126 [2024-11-15 11:53:25.304440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.126 [2024-11-15 11:53:25.304450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.126 [2024-11-15 11:53:25.304456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.127 [2024-11-15 11:53:25.304461] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.127 [2024-11-15 11:53:25.316309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.127 [2024-11-15 11:53:25.316766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.127 [2024-11-15 11:53:25.316779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.127 [2024-11-15 11:53:25.316784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.127 [2024-11-15 11:53:25.316935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.127 [2024-11-15 11:53:25.317084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.127 [2024-11-15 11:53:25.317090] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.127 [2024-11-15 11:53:25.317095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.127 [2024-11-15 11:53:25.317099] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.127 [2024-11-15 11:53:25.328957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.127 [2024-11-15 11:53:25.329443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.127 [2024-11-15 11:53:25.329456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.127 [2024-11-15 11:53:25.329461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.127 [2024-11-15 11:53:25.329615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.127 [2024-11-15 11:53:25.329767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.127 [2024-11-15 11:53:25.329773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.127 [2024-11-15 11:53:25.329778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.127 [2024-11-15 11:53:25.329782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.127 [2024-11-15 11:53:25.341567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.127 [2024-11-15 11:53:25.342029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.127 [2024-11-15 11:53:25.342043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.127 [2024-11-15 11:53:25.342048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.127 [2024-11-15 11:53:25.342199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.127 [2024-11-15 11:53:25.342348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.127 [2024-11-15 11:53:25.342354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.127 [2024-11-15 11:53:25.342359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.127 [2024-11-15 11:53:25.342364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.127 [2024-11-15 11:53:25.354213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.127 [2024-11-15 11:53:25.354660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.127 [2024-11-15 11:53:25.354691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.127 [2024-11-15 11:53:25.354701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.127 [2024-11-15 11:53:25.354871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.127 [2024-11-15 11:53:25.355025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.127 [2024-11-15 11:53:25.355031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.127 [2024-11-15 11:53:25.355037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.127 [2024-11-15 11:53:25.355043] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.127 [2024-11-15 11:53:25.366893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.127 [2024-11-15 11:53:25.367480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.127 [2024-11-15 11:53:25.367511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.127 [2024-11-15 11:53:25.367520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.127 [2024-11-15 11:53:25.367693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.127 [2024-11-15 11:53:25.367847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.127 [2024-11-15 11:53:25.367853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.127 [2024-11-15 11:53:25.367859] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.127 [2024-11-15 11:53:25.367865] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.127 [2024-11-15 11:53:25.379576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.127 [2024-11-15 11:53:25.380207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.127 [2024-11-15 11:53:25.380237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.127 [2024-11-15 11:53:25.380246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.127 [2024-11-15 11:53:25.380413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.127 [2024-11-15 11:53:25.380571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.127 [2024-11-15 11:53:25.380578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.127 [2024-11-15 11:53:25.380583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.127 [2024-11-15 11:53:25.380589] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.127 [2024-11-15 11:53:25.392291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.127 [2024-11-15 11:53:25.392806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.127 [2024-11-15 11:53:25.392825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.127 [2024-11-15 11:53:25.392831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.127 [2024-11-15 11:53:25.392982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.127 [2024-11-15 11:53:25.393133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.127 [2024-11-15 11:53:25.393138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.127 [2024-11-15 11:53:25.393144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.127 [2024-11-15 11:53:25.393148] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.127 [2024-11-15 11:53:25.397954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:30:00.127 [2024-11-15 11:53:25.404997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.127 [2024-11-15 11:53:25.405516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.127 [2024-11-15 11:53:25.405529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.127 [2024-11-15 11:53:25.405535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.127 [2024-11-15 11:53:25.405690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.127 [2024-11-15 11:53:25.405841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.127 [2024-11-15 11:53:25.405847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.127 [2024-11-15 11:53:25.405852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.127 [2024-11-15 11:53:25.405857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.127 [2024-11-15 11:53:25.417711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.127 [2024-11-15 11:53:25.418334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.127 [2024-11-15 11:53:25.418365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.127 [2024-11-15 11:53:25.418374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.127 [2024-11-15 11:53:25.418545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.127 [2024-11-15 11:53:25.418705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.127 [2024-11-15 11:53:25.418712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.127 [2024-11-15 11:53:25.418717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.127 [2024-11-15 11:53:25.418723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.127 [2024-11-15 11:53:25.427052] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:00.127 [2024-11-15 11:53:25.427075] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:00.127 [2024-11-15 11:53:25.427081] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:00.127 [2024-11-15 11:53:25.427087] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:00.127 [2024-11-15 11:53:25.427094] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:00.128 [2024-11-15 11:53:25.428198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:30:00.128 [2024-11-15 11:53:25.428354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:00.128 [2024-11-15 11:53:25.428357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:30:00.128 [2024-11-15 11:53:25.430443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.128 [2024-11-15 11:53:25.431025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.128 [2024-11-15 11:53:25.431056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.128 [2024-11-15 11:53:25.431065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.128 [2024-11-15 11:53:25.431234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.128 [2024-11-15 11:53:25.431386] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.128 [2024-11-15 11:53:25.431393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.128 [2024-11-15 11:53:25.431398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.128 [2024-11-15 11:53:25.431404] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.128 [2024-11-15 11:53:25.443123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.128 [2024-11-15 11:53:25.443686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.128 [2024-11-15 11:53:25.443717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.128 [2024-11-15 11:53:25.443726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.128 [2024-11-15 11:53:25.443896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.128 [2024-11-15 11:53:25.444049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.128 [2024-11-15 11:53:25.444055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.128 [2024-11-15 11:53:25.444061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.128 [2024-11-15 11:53:25.444067] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.128 [2024-11-15 11:53:25.455789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.128 [2024-11-15 11:53:25.456271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.128 [2024-11-15 11:53:25.456286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.128 [2024-11-15 11:53:25.456293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.128 [2024-11-15 11:53:25.456443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.128 [2024-11-15 11:53:25.456596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.128 [2024-11-15 11:53:25.456603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.128 [2024-11-15 11:53:25.456608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.128 [2024-11-15 11:53:25.456618] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.128 [2024-11-15 11:53:25.468465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.128 [2024-11-15 11:53:25.468815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.128 [2024-11-15 11:53:25.468829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.128 [2024-11-15 11:53:25.468836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.128 [2024-11-15 11:53:25.468987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.128 [2024-11-15 11:53:25.469137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.128 [2024-11-15 11:53:25.469143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.128 [2024-11-15 11:53:25.469148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.128 [2024-11-15 11:53:25.469153] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.128 [2024-11-15 11:53:25.481137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.128 [2024-11-15 11:53:25.481609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.128 [2024-11-15 11:53:25.481631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.128 [2024-11-15 11:53:25.481637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.128 [2024-11-15 11:53:25.481794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.128 [2024-11-15 11:53:25.481946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.128 [2024-11-15 11:53:25.481951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.128 [2024-11-15 11:53:25.481957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.128 [2024-11-15 11:53:25.481962] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.128 [2024-11-15 11:53:25.493814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.128 [2024-11-15 11:53:25.494280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.128 [2024-11-15 11:53:25.494293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.128 [2024-11-15 11:53:25.494299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.128 [2024-11-15 11:53:25.494448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.128 [2024-11-15 11:53:25.494643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.128 [2024-11-15 11:53:25.494651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.128 [2024-11-15 11:53:25.494656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.128 [2024-11-15 11:53:25.494660] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.128 [2024-11-15 11:53:25.506494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.128 [2024-11-15 11:53:25.507016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.128 [2024-11-15 11:53:25.507034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.128 [2024-11-15 11:53:25.507039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.128 [2024-11-15 11:53:25.507190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.128 [2024-11-15 11:53:25.507340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.128 [2024-11-15 11:53:25.507345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.128 [2024-11-15 11:53:25.507350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.128 [2024-11-15 11:53:25.507355] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.128 [2024-11-15 11:53:25.519202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.128 [2024-11-15 11:53:25.519822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.128 [2024-11-15 11:53:25.519852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.128 [2024-11-15 11:53:25.519861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.128 [2024-11-15 11:53:25.520028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.128 [2024-11-15 11:53:25.520181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.128 [2024-11-15 11:53:25.520187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.128 [2024-11-15 11:53:25.520192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.128 [2024-11-15 11:53:25.520198] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.128 [2024-11-15 11:53:25.531919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.128 [2024-11-15 11:53:25.532387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.128 [2024-11-15 11:53:25.532418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.128 [2024-11-15 11:53:25.532427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.128 [2024-11-15 11:53:25.532601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.128 [2024-11-15 11:53:25.532755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.128 [2024-11-15 11:53:25.532761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.128 [2024-11-15 11:53:25.532766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.128 [2024-11-15 11:53:25.532772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.128 [2024-11-15 11:53:25.544619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.128 [2024-11-15 11:53:25.545127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.128 [2024-11-15 11:53:25.545157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.128 [2024-11-15 11:53:25.545166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.128 [2024-11-15 11:53:25.545337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.128 [2024-11-15 11:53:25.545491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.129 [2024-11-15 11:53:25.545498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.129 [2024-11-15 11:53:25.545503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.129 [2024-11-15 11:53:25.545509] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.129 [2024-11-15 11:53:25.557223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.129 [2024-11-15 11:53:25.557836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.129 [2024-11-15 11:53:25.557867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.129 [2024-11-15 11:53:25.557877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.129 [2024-11-15 11:53:25.558044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.129 [2024-11-15 11:53:25.558197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.129 [2024-11-15 11:53:25.558204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.129 [2024-11-15 11:53:25.558211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.129 [2024-11-15 11:53:25.558217] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.129 [2024-11-15 11:53:25.569929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.129 [2024-11-15 11:53:25.570394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.129 [2024-11-15 11:53:25.570409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.129 [2024-11-15 11:53:25.570415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.129 [2024-11-15 11:53:25.570570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.129 [2024-11-15 11:53:25.570721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.129 [2024-11-15 11:53:25.570727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.129 [2024-11-15 11:53:25.570732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.129 [2024-11-15 11:53:25.570737] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.129 [2024-11-15 11:53:25.582581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.129 [2024-11-15 11:53:25.583011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.129 [2024-11-15 11:53:25.583041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.129 [2024-11-15 11:53:25.583050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.129 [2024-11-15 11:53:25.583217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.129 [2024-11-15 11:53:25.583370] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.129 [2024-11-15 11:53:25.583380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.129 [2024-11-15 11:53:25.583385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.129 [2024-11-15 11:53:25.583391] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.129 [2024-11-15 11:53:25.595252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.129 [2024-11-15 11:53:25.595715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.129 [2024-11-15 11:53:25.595730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.129 [2024-11-15 11:53:25.595736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.129 [2024-11-15 11:53:25.595886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.129 [2024-11-15 11:53:25.596036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.129 [2024-11-15 11:53:25.596042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.129 [2024-11-15 11:53:25.596047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.129 [2024-11-15 11:53:25.596051] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.391 [2024-11-15 11:53:25.607894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.391 [2024-11-15 11:53:25.608355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.391 [2024-11-15 11:53:25.608367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.391 [2024-11-15 11:53:25.608373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.391 [2024-11-15 11:53:25.608523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.391 [2024-11-15 11:53:25.608676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.391 [2024-11-15 11:53:25.608682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.391 [2024-11-15 11:53:25.608688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.391 [2024-11-15 11:53:25.608693] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.391 4485.17 IOPS, 17.52 MiB/s [2024-11-15T10:53:25.889Z] [2024-11-15 11:53:25.621111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.391 [2024-11-15 11:53:25.621570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.391 [2024-11-15 11:53:25.621583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.391 [2024-11-15 11:53:25.621589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.391 [2024-11-15 11:53:25.621739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.391 [2024-11-15 11:53:25.621889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.391 [2024-11-15 11:53:25.621894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.391 [2024-11-15 11:53:25.621899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.391 [2024-11-15 11:53:25.621908] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.391 [2024-11-15 11:53:25.633737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.391 [2024-11-15 11:53:25.634200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.391 [2024-11-15 11:53:25.634213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.391 [2024-11-15 11:53:25.634218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.391 [2024-11-15 11:53:25.634368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.391 [2024-11-15 11:53:25.634517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.391 [2024-11-15 11:53:25.634523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.391 [2024-11-15 11:53:25.634528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.391 [2024-11-15 11:53:25.634533] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.391 [2024-11-15 11:53:25.646353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.391 [2024-11-15 11:53:25.646971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.391 [2024-11-15 11:53:25.647002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.391 [2024-11-15 11:53:25.647011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.391 [2024-11-15 11:53:25.647178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.391 [2024-11-15 11:53:25.647331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.391 [2024-11-15 11:53:25.647337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.391 [2024-11-15 11:53:25.647342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.391 [2024-11-15 11:53:25.647347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.391 [2024-11-15 11:53:25.659045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.391 [2024-11-15 11:53:25.659611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.391 [2024-11-15 11:53:25.659642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.391 [2024-11-15 11:53:25.659651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.391 [2024-11-15 11:53:25.659820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.391 [2024-11-15 11:53:25.659973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.391 [2024-11-15 11:53:25.659979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.391 [2024-11-15 11:53:25.659985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.391 [2024-11-15 11:53:25.659990] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.391 [2024-11-15 11:53:25.671689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.391 [2024-11-15 11:53:25.672241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.391 [2024-11-15 11:53:25.672272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.391 [2024-11-15 11:53:25.672281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.391 [2024-11-15 11:53:25.672447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.391 [2024-11-15 11:53:25.672605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.391 [2024-11-15 11:53:25.672612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.391 [2024-11-15 11:53:25.672618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.391 [2024-11-15 11:53:25.672623] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.391 [2024-11-15 11:53:25.684313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.391 [2024-11-15 11:53:25.684661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.391 [2024-11-15 11:53:25.684677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.391 [2024-11-15 11:53:25.684682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.391 [2024-11-15 11:53:25.684833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.391 [2024-11-15 11:53:25.684983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.391 [2024-11-15 11:53:25.684988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.391 [2024-11-15 11:53:25.684993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.391 [2024-11-15 11:53:25.684998] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.391 [2024-11-15 11:53:25.696975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.392 [2024-11-15 11:53:25.697475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.392 [2024-11-15 11:53:25.697487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.392 [2024-11-15 11:53:25.697493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.392 [2024-11-15 11:53:25.697646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.392 [2024-11-15 11:53:25.697797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.392 [2024-11-15 11:53:25.697803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.392 [2024-11-15 11:53:25.697807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.392 [2024-11-15 11:53:25.697812] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.392 [2024-11-15 11:53:25.709629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.392 [2024-11-15 11:53:25.710181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.392 [2024-11-15 11:53:25.710212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.392 [2024-11-15 11:53:25.710221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.392 [2024-11-15 11:53:25.710395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.392 [2024-11-15 11:53:25.710548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.392 [2024-11-15 11:53:25.710554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.392 [2024-11-15 11:53:25.710559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.392 [2024-11-15 11:53:25.710572] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.392 [2024-11-15 11:53:25.722267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.392 [2024-11-15 11:53:25.722693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.392 [2024-11-15 11:53:25.722724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.392 [2024-11-15 11:53:25.722732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.392 [2024-11-15 11:53:25.722901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.392 [2024-11-15 11:53:25.723055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.392 [2024-11-15 11:53:25.723061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.392 [2024-11-15 11:53:25.723066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.392 [2024-11-15 11:53:25.723072] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.392 [2024-11-15 11:53:25.734934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.392 [2024-11-15 11:53:25.735491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.392 [2024-11-15 11:53:25.735521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.392 [2024-11-15 11:53:25.735530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.392 [2024-11-15 11:53:25.735705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.392 [2024-11-15 11:53:25.735859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.392 [2024-11-15 11:53:25.735865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.392 [2024-11-15 11:53:25.735870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.392 [2024-11-15 11:53:25.735876] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.392 [2024-11-15 11:53:25.747577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.392 [2024-11-15 11:53:25.748141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.392 [2024-11-15 11:53:25.748172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.392 [2024-11-15 11:53:25.748180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.392 [2024-11-15 11:53:25.748347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.392 [2024-11-15 11:53:25.748499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.392 [2024-11-15 11:53:25.748509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.392 [2024-11-15 11:53:25.748515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.392 [2024-11-15 11:53:25.748520] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.392 [2024-11-15 11:53:25.760229] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.392 [2024-11-15 11:53:25.760689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.392 [2024-11-15 11:53:25.760704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.392 [2024-11-15 11:53:25.760710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.392 [2024-11-15 11:53:25.760861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.392 [2024-11-15 11:53:25.761011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.392 [2024-11-15 11:53:25.761017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.392 [2024-11-15 11:53:25.761021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.392 [2024-11-15 11:53:25.761026] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.392 [2024-11-15 11:53:25.772868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.392 [2024-11-15 11:53:25.773416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.392 [2024-11-15 11:53:25.773446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.392 [2024-11-15 11:53:25.773454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.392 [2024-11-15 11:53:25.773627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.392 [2024-11-15 11:53:25.773781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.392 [2024-11-15 11:53:25.773787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.392 [2024-11-15 11:53:25.773792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.392 [2024-11-15 11:53:25.773798] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.392 [2024-11-15 11:53:25.785494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.392 [2024-11-15 11:53:25.786031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.392 [2024-11-15 11:53:25.786046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.392 [2024-11-15 11:53:25.786052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.392 [2024-11-15 11:53:25.786203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.392 [2024-11-15 11:53:25.786353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.392 [2024-11-15 11:53:25.786358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.392 [2024-11-15 11:53:25.786363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.392 [2024-11-15 11:53:25.786372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.392 [2024-11-15 11:53:25.798218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.392 [2024-11-15 11:53:25.798688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.392 [2024-11-15 11:53:25.798702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.392 [2024-11-15 11:53:25.798707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.392 [2024-11-15 11:53:25.798858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.392 [2024-11-15 11:53:25.799008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.392 [2024-11-15 11:53:25.799013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.392 [2024-11-15 11:53:25.799019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.392 [2024-11-15 11:53:25.799025] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.392 [2024-11-15 11:53:25.810864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.392 [2024-11-15 11:53:25.811284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.392 [2024-11-15 11:53:25.811296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.392 [2024-11-15 11:53:25.811301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.392 [2024-11-15 11:53:25.811451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.393 [2024-11-15 11:53:25.811606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.393 [2024-11-15 11:53:25.811613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.393 [2024-11-15 11:53:25.811618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.393 [2024-11-15 11:53:25.811624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.393 [2024-11-15 11:53:25.823485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.393 [2024-11-15 11:53:25.824048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.393 [2024-11-15 11:53:25.824078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.393 [2024-11-15 11:53:25.824087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.393 [2024-11-15 11:53:25.824254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.393 [2024-11-15 11:53:25.824407] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.393 [2024-11-15 11:53:25.824413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.393 [2024-11-15 11:53:25.824418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.393 [2024-11-15 11:53:25.824424] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.393 [2024-11-15 11:53:25.836137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.393 [2024-11-15 11:53:25.836491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.393 [2024-11-15 11:53:25.836506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.393 [2024-11-15 11:53:25.836511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.393 [2024-11-15 11:53:25.836666] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.393 [2024-11-15 11:53:25.836816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.393 [2024-11-15 11:53:25.836822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.393 [2024-11-15 11:53:25.836827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.393 [2024-11-15 11:53:25.836831] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.393 [2024-11-15 11:53:25.848808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.393 [2024-11-15 11:53:25.849223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.393 [2024-11-15 11:53:25.849236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.393 [2024-11-15 11:53:25.849241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.393 [2024-11-15 11:53:25.849391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.393 [2024-11-15 11:53:25.849541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.393 [2024-11-15 11:53:25.849546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.393 [2024-11-15 11:53:25.849551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.393 [2024-11-15 11:53:25.849556] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.393 [2024-11-15 11:53:25.861533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.393 [2024-11-15 11:53:25.862015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.393 [2024-11-15 11:53:25.862045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.393 [2024-11-15 11:53:25.862054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.393 [2024-11-15 11:53:25.862221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.393 [2024-11-15 11:53:25.862374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.393 [2024-11-15 11:53:25.862380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.393 [2024-11-15 11:53:25.862386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.393 [2024-11-15 11:53:25.862391] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.393 [2024-11-15 11:53:25.874239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.393 [2024-11-15 11:53:25.874565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.393 [2024-11-15 11:53:25.874580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.393 [2024-11-15 11:53:25.874586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.393 [2024-11-15 11:53:25.874741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.393 [2024-11-15 11:53:25.874890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.393 [2024-11-15 11:53:25.874896] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.393 [2024-11-15 11:53:25.874901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.393 [2024-11-15 11:53:25.874907] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.655 [2024-11-15 11:53:25.886887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.655 [2024-11-15 11:53:25.887350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.655 [2024-11-15 11:53:25.887363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.655 [2024-11-15 11:53:25.887368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.655 [2024-11-15 11:53:25.887518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.655 [2024-11-15 11:53:25.887673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.655 [2024-11-15 11:53:25.887678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.655 [2024-11-15 11:53:25.887683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.655 [2024-11-15 11:53:25.887688] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.655 [2024-11-15 11:53:25.899527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.655 [2024-11-15 11:53:25.899920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.655 [2024-11-15 11:53:25.899933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.655 [2024-11-15 11:53:25.899938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.655 [2024-11-15 11:53:25.900088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.655 [2024-11-15 11:53:25.900238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.655 [2024-11-15 11:53:25.900243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.656 [2024-11-15 11:53:25.900249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.656 [2024-11-15 11:53:25.900253] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.656 [2024-11-15 11:53:25.912227] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.656 [2024-11-15 11:53:25.912705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.656 [2024-11-15 11:53:25.912737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.656 [2024-11-15 11:53:25.912745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.656 [2024-11-15 11:53:25.912915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.656 [2024-11-15 11:53:25.913068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.656 [2024-11-15 11:53:25.913078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.656 [2024-11-15 11:53:25.913084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.656 [2024-11-15 11:53:25.913089] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.656 [2024-11-15 11:53:25.924945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.656 [2024-11-15 11:53:25.925411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.656 [2024-11-15 11:53:25.925427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.656 [2024-11-15 11:53:25.925432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.656 [2024-11-15 11:53:25.925588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.656 [2024-11-15 11:53:25.925746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.656 [2024-11-15 11:53:25.925752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.656 [2024-11-15 11:53:25.925757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.656 [2024-11-15 11:53:25.925762] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.656 [2024-11-15 11:53:25.937597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.656 [2024-11-15 11:53:25.938033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.656 [2024-11-15 11:53:25.938046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.656 [2024-11-15 11:53:25.938051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.656 [2024-11-15 11:53:25.938201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.656 [2024-11-15 11:53:25.938351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.656 [2024-11-15 11:53:25.938356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.656 [2024-11-15 11:53:25.938362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.656 [2024-11-15 11:53:25.938367] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.656 [2024-11-15 11:53:25.950196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.656 [2024-11-15 11:53:25.950681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.656 [2024-11-15 11:53:25.950712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.656 [2024-11-15 11:53:25.950720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.656 [2024-11-15 11:53:25.950888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.656 [2024-11-15 11:53:25.951041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.656 [2024-11-15 11:53:25.951047] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.656 [2024-11-15 11:53:25.951053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.656 [2024-11-15 11:53:25.951062] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.656 [2024-11-15 11:53:25.962911] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.656 [2024-11-15 11:53:25.963357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.656 [2024-11-15 11:53:25.963387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.656 [2024-11-15 11:53:25.963396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.656 [2024-11-15 11:53:25.963569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.656 [2024-11-15 11:53:25.963723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.656 [2024-11-15 11:53:25.963729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.656 [2024-11-15 11:53:25.963734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.656 [2024-11-15 11:53:25.963740] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.656 [2024-11-15 11:53:25.975581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.656 [2024-11-15 11:53:25.976147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.656 [2024-11-15 11:53:25.976177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.656 [2024-11-15 11:53:25.976186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.656 [2024-11-15 11:53:25.976352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.656 [2024-11-15 11:53:25.976505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.656 [2024-11-15 11:53:25.976511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.656 [2024-11-15 11:53:25.976516] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.656 [2024-11-15 11:53:25.976521] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.656 [2024-11-15 11:53:25.988240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.656 [2024-11-15 11:53:25.988707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.656 [2024-11-15 11:53:25.988723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.656 [2024-11-15 11:53:25.988728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.656 [2024-11-15 11:53:25.988879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.656 [2024-11-15 11:53:25.989029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.656 [2024-11-15 11:53:25.989034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.656 [2024-11-15 11:53:25.989039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.656 [2024-11-15 11:53:25.989044] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.656 [2024-11-15 11:53:26.000893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.656 [2024-11-15 11:53:26.001213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.656 [2024-11-15 11:53:26.001227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.656 [2024-11-15 11:53:26.001232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.656 [2024-11-15 11:53:26.001383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.656 [2024-11-15 11:53:26.001533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.656 [2024-11-15 11:53:26.001538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.656 [2024-11-15 11:53:26.001543] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.656 [2024-11-15 11:53:26.001549] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.656 [2024-11-15 11:53:26.013557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.656 [2024-11-15 11:53:26.014013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.656 [2024-11-15 11:53:26.014026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.656 [2024-11-15 11:53:26.014031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.656 [2024-11-15 11:53:26.014181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.656 [2024-11-15 11:53:26.014331] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.656 [2024-11-15 11:53:26.014338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.656 [2024-11-15 11:53:26.014343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.656 [2024-11-15 11:53:26.014347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.656 [2024-11-15 11:53:26.026189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.656 [2024-11-15 11:53:26.026535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.656 [2024-11-15 11:53:26.026547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.656 [2024-11-15 11:53:26.026553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.656 [2024-11-15 11:53:26.026707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.656 [2024-11-15 11:53:26.026858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.656 [2024-11-15 11:53:26.026864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.657 [2024-11-15 11:53:26.026869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.657 [2024-11-15 11:53:26.026873] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.657 [2024-11-15 11:53:26.038850] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.657 [2024-11-15 11:53:26.039299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.657 [2024-11-15 11:53:26.039312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.657 [2024-11-15 11:53:26.039317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.657 [2024-11-15 11:53:26.039470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.657 [2024-11-15 11:53:26.039624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.657 [2024-11-15 11:53:26.039631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.657 [2024-11-15 11:53:26.039636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.657 [2024-11-15 11:53:26.039641] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.657 [2024-11-15 11:53:26.051470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.657 [2024-11-15 11:53:26.051894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.657 [2024-11-15 11:53:26.051907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.657 [2024-11-15 11:53:26.051912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.657 [2024-11-15 11:53:26.052062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.657 [2024-11-15 11:53:26.052212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.657 [2024-11-15 11:53:26.052218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.657 [2024-11-15 11:53:26.052223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.657 [2024-11-15 11:53:26.052228] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.657 [2024-11-15 11:53:26.064064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.657 [2024-11-15 11:53:26.064520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.657 [2024-11-15 11:53:26.064532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.657 [2024-11-15 11:53:26.064538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.657 [2024-11-15 11:53:26.064691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.657 [2024-11-15 11:53:26.064841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.657 [2024-11-15 11:53:26.064847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.657 [2024-11-15 11:53:26.064852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.657 [2024-11-15 11:53:26.064856] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.657 [2024-11-15 11:53:26.076690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.657 [2024-11-15 11:53:26.077255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.657 [2024-11-15 11:53:26.077286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.657 [2024-11-15 11:53:26.077294] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.657 [2024-11-15 11:53:26.077461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.657 [2024-11-15 11:53:26.077621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.657 [2024-11-15 11:53:26.077632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.657 [2024-11-15 11:53:26.077637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.657 [2024-11-15 11:53:26.077643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.657 [2024-11-15 11:53:26.089340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.657 [2024-11-15 11:53:26.089898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.657 [2024-11-15 11:53:26.089929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.657 [2024-11-15 11:53:26.089938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.657 [2024-11-15 11:53:26.090104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.657 [2024-11-15 11:53:26.090258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.657 [2024-11-15 11:53:26.090264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.657 [2024-11-15 11:53:26.090270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.657 [2024-11-15 11:53:26.090275] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.657 11:53:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:30:00.657 11:53:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0
00:30:00.657 11:53:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:30:00.657 11:53:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
00:30:00.657 11:53:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:00.657 [2024-11-15 11:53:26.101992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.657 [2024-11-15 11:53:26.102585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.657 [2024-11-15 11:53:26.102615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.657 [2024-11-15 11:53:26.102624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.657 [2024-11-15 11:53:26.102791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.657 [2024-11-15 11:53:26.102944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.657 [2024-11-15 11:53:26.102950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.657 [2024-11-15 11:53:26.102955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.657 [2024-11-15 11:53:26.102961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.657 [2024-11-15 11:53:26.114669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.657 [2024-11-15 11:53:26.115267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.657 [2024-11-15 11:53:26.115298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.657 [2024-11-15 11:53:26.115307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.657 [2024-11-15 11:53:26.115473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.657 [2024-11-15 11:53:26.115637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.657 [2024-11-15 11:53:26.115644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.657 [2024-11-15 11:53:26.115650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.657 [2024-11-15 11:53:26.115656] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.657 [2024-11-15 11:53:26.127365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.657 [2024-11-15 11:53:26.128058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.657 [2024-11-15 11:53:26.128090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.657 [2024-11-15 11:53:26.128099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.657 [2024-11-15 11:53:26.128268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.657 [2024-11-15 11:53:26.128420] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.657 [2024-11-15 11:53:26.128427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.657 [2024-11-15 11:53:26.128432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.657 [2024-11-15 11:53:26.128438] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.657 11:53:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:00.657 11:53:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:30:00.657 11:53:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:00.657 11:53:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:00.657 [2024-11-15 11:53:26.140005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.657 [2024-11-15 11:53:26.140525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.657 [2024-11-15 11:53:26.140555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.657 [2024-11-15 11:53:26.140571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.657 [2024-11-15 11:53:26.140737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.657 [2024-11-15 11:53:26.140807] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:00.657 [2024-11-15 11:53:26.140890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.657 [2024-11-15 11:53:26.140897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.657 [2024-11-15 11:53:26.140903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.658 [2024-11-15 11:53:26.140908] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
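The rpc_cmd calls in this stage configure the running nvmf target over its JSON-RPC channel, and the 'TCP Transport Init' notice is the target acknowledging the new transport. A sketch of the same step issued directly, assuming the test wrapper forwards to scripts/rpc.py and the target uses SPDK's default RPC socket path (both assumptions here, not shown in this log):

# Hypothetical direct equivalent of the rpc_cmd wrapper call above; the
# flags are exactly those from the log, the socket path is an assumption.
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192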
00:30:00.658 11:53:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:00.658 11:53:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:30:00.658 11:53:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:00.658 11:53:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:00.918 [2024-11-15 11:53:26.152604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.918 [2024-11-15 11:53:26.153097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.918 [2024-11-15 11:53:26.153112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.918 [2024-11-15 11:53:26.153118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.918 [2024-11-15 11:53:26.153268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.918 [2024-11-15 11:53:26.153418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.918 [2024-11-15 11:53:26.153424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.918 [2024-11-15 11:53:26.153429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.918 [2024-11-15 11:53:26.153433] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.918 [2024-11-15 11:53:26.165306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.918 [2024-11-15 11:53:26.165742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.918 [2024-11-15 11:53:26.165757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.918 [2024-11-15 11:53:26.165762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.918 [2024-11-15 11:53:26.165913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.918 [2024-11-15 11:53:26.166063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.918 [2024-11-15 11:53:26.166069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.918 [2024-11-15 11:53:26.166074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.918 [2024-11-15 11:53:26.166078] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
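bdev_malloc_create 64 512 -b Malloc0 requests a RAM-backed bdev of 64 MiB with a 512-byte block size (the first argument being a size in MiB follows the RPC's usual convention), so the device exposed as a namespace in the next step holds 131072 blocks:

# Size arithmetic for the malloc bdev requested above:
echo $(( 64 * 1024 * 1024 / 512 ))   # 131072 blocks back Malloc0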
00:30:00.919 [2024-11-15 11:53:26.177916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.919 [2024-11-15 11:53:26.178472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.919 [2024-11-15 11:53:26.178503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.919 [2024-11-15 11:53:26.178512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.919 [2024-11-15 11:53:26.178689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.919 [2024-11-15 11:53:26.178842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.919 [2024-11-15 11:53:26.178848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.919 [2024-11-15 11:53:26.178854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.919 [2024-11-15 11:53:26.178860] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.919 Malloc0 11:53:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:00.919 11:53:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:00.919 11:53:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:00.919 11:53:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:00.919 [2024-11-15 11:53:26.190554] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.919 [2024-11-15 11:53:26.190897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.919 [2024-11-15 11:53:26.190912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420
00:30:00.919 [2024-11-15 11:53:26.190918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set
00:30:00.919 [2024-11-15 11:53:26.191069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor
00:30:00.919 [2024-11-15 11:53:26.191219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.919 [2024-11-15 11:53:26.191224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.919 [2024-11-15 11:53:26.191229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.919 [2024-11-15 11:53:26.191234] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
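The 'Malloc0' echoed above is the bdev name returned by the previous RPC, and nvmf_create_subsystem then registers the very NQN the host side has been retrying against; -a allows any host NQN to connect and -s sets the reported serial number (flag meanings as commonly documented for this RPC, stated here as an annotation rather than taken from this log). A direct-RPC sketch of the same call:

# Sketch: the same subsystem creation issued without the test wrapper.
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001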
00:30:00.919 11:53:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.919 11:53:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:00.919 11:53:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.919 11:53:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:00.919 [2024-11-15 11:53:26.203225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.919 [2024-11-15 11:53:26.203660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.919 [2024-11-15 11:53:26.203689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11ac000 with addr=10.0.0.2, port=4420 00:30:00.919 [2024-11-15 11:53:26.203697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11ac000 is same with the state(6) to be set 00:30:00.919 11:53:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.919 [2024-11-15 11:53:26.203864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ac000 (9): Bad file descriptor 00:30:00.919 [2024-11-15 11:53:26.204017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.919 [2024-11-15 11:53:26.204023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.919 [2024-11-15 11:53:26.204029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.919 [2024-11-15 11:53:26.204034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:00.919 11:53:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:00.919 11:53:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:00.919 11:53:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:00.919 [2024-11-15 11:53:26.210971] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:00.919 [2024-11-15 11:53:26.215880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.919 11:53:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:00.919 11:53:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1256837 00:30:00.919 [2024-11-15 11:53:26.280053] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
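Note for readers following along: the rpc_cmd lines interleaved with the reset noise above are the whole target-side setup for this bdevperf pass. rpc_cmd in the autotest scripts is effectively a wrapper around scripts/rpc.py, so, condensed into direct rpc.py calls and assuming a running nvmf_tgt on the default RPC socket (transport creation with nvmf_create_transport -t tcp -o happened earlier in the script), the sequence is roughly:

  # Sketch of the target setup recorded above, not a verbatim replay:
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Once the listener is up ("NVMe/TCP Target Listening on 10.0.0.2 port 4420"), the reset storm resolves and the controller reset completes successfully, which is why the IOPS ramp below starts immediately afterwards.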
00:30:02.564 4456.14 IOPS, 17.41 MiB/s [2024-11-15T10:53:28.633Z] 5432.62 IOPS, 21.22 MiB/s [2024-11-15T10:53:30.018Z] 6183.78 IOPS, 24.16 MiB/s [2024-11-15T10:53:30.959Z] 6797.80 IOPS, 26.55 MiB/s [2024-11-15T10:53:31.899Z] 7291.00 IOPS, 28.48 MiB/s [2024-11-15T10:53:32.841Z] 7710.25 IOPS, 30.12 MiB/s [2024-11-15T10:53:33.781Z] 8054.31 IOPS, 31.46 MiB/s [2024-11-15T10:53:34.725Z] 8341.64 IOPS, 32.58 MiB/s 00:30:09.227 Latency(us) 00:30:09.227 [2024-11-15T10:53:34.725Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:09.227 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:09.227 Verification LBA range: start 0x0 length 0x4000 00:30:09.227 Nvme1n1 : 15.01 8603.91 33.61 13638.45 0.00 5733.49 549.55 12451.84 00:30:09.227 [2024-11-15T10:53:34.725Z] =================================================================================================================== 00:30:09.227 [2024-11-15T10:53:34.725Z] Total : 8603.91 33.61 13638.45 0.00 5733.49 549.55 12451.84 00:30:09.486 11:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:30:09.486 11:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:09.486 11:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:09.486 11:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:09.486 11:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:09.486 11:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:30:09.486 11:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:30:09.486 11:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:09.486 11:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:30:09.486 11:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:09.486 11:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:30:09.486 11:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:09.486 11:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:09.486 rmmod nvme_tcp 00:30:09.486 rmmod nvme_fabrics 00:30:09.486 rmmod nvme_keyring 00:30:09.486 11:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:09.486 11:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:30:09.486 11:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:30:09.486 11:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1258023 ']' 00:30:09.486 11:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1258023 00:30:09.486 11:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 1258023 ']' 00:30:09.486 11:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # kill -0 1258023 00:30:09.486 11:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # uname 00:30:09.486 11:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:09.486 11:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1258023 00:30:09.486 11:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:09.486 11:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:09.486 11:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1258023' 00:30:09.486 killing process with pid 1258023 00:30:09.486 11:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@971 -- # kill 1258023 00:30:09.486 11:53:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@976 -- # wait 1258023 00:30:09.746 11:53:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:09.746 11:53:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:09.746 11:53:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:09.746 11:53:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:30:09.746 11:53:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:30:09.746 11:53:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:09.746 11:53:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:30:09.746 11:53:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:09.746 11:53:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:09.746 11:53:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:09.746 11:53:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:09.746 11:53:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:11.660 11:53:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:11.660 00:30:11.660 real 0m28.407s 00:30:11.660 user 1m3.535s 00:30:11.660 sys 0m7.745s 00:30:11.660 11:53:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:11.660 11:53:37 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:11.660 ************************************ 00:30:11.660 END TEST nvmf_bdevperf 00:30:11.660 ************************************ 00:30:11.660 11:53:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:11.660 11:53:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:30:11.660 11:53:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:11.660 11:53:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:11.921 ************************************ 00:30:11.921 START TEST nvmf_target_disconnect 00:30:11.921 ************************************ 00:30:11.921 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:11.921 * Looking for test storage... 
00:30:11.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:11.921 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:11.921 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:30:11.921 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:11.921 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:11.921 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:11.921 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:11.921 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:11.921 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:30:11.921 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:30:11.921 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:30:11.921 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:30:11.921 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:30:11.921 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:30:11.921 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:11.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.922 --rc genhtml_branch_coverage=1 00:30:11.922 --rc genhtml_function_coverage=1 00:30:11.922 --rc genhtml_legend=1 00:30:11.922 --rc geninfo_all_blocks=1 00:30:11.922 --rc geninfo_unexecuted_blocks=1 00:30:11.922 00:30:11.922 ' 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:11.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.922 --rc genhtml_branch_coverage=1 00:30:11.922 --rc genhtml_function_coverage=1 00:30:11.922 --rc genhtml_legend=1 00:30:11.922 --rc geninfo_all_blocks=1 00:30:11.922 --rc geninfo_unexecuted_blocks=1 00:30:11.922 00:30:11.922 ' 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:11.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.922 --rc genhtml_branch_coverage=1 00:30:11.922 --rc genhtml_function_coverage=1 00:30:11.922 --rc genhtml_legend=1 00:30:11.922 --rc geninfo_all_blocks=1 00:30:11.922 --rc geninfo_unexecuted_blocks=1 00:30:11.922 00:30:11.922 ' 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:11.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:11.922 --rc genhtml_branch_coverage=1 00:30:11.922 --rc genhtml_function_coverage=1 00:30:11.922 --rc genhtml_legend=1 00:30:11.922 --rc geninfo_all_blocks=1 00:30:11.922 --rc geninfo_unexecuted_blocks=1 00:30:11.922 00:30:11.922 ' 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:11.922 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:11.922 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:12.183 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:12.183 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:12.183 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:30:12.183 11:53:37 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:20.323 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:20.323 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:30:20.323 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:20.323 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:20.323 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:20.323 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:20.323 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:20.323 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:30:20.323 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:20.323 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:30:20.323 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:30:20.323 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:30:20.323 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:30:20.323 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:30:20.323 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:30:20.323 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:20.323 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:20.323 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:20.323 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:20.323 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:20.323 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:20.323 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:20.323 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:20.323 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:20.323 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:20.323 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:20.323 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:20.323 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:20.324 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:20.324 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:20.324 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:20.324 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
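The nvmf_tcp_init lines just below carve the two E810 ports discovered above into a point-to-point test topology: the target port moves into a private network namespace and the initiator port stays in the default one. Stripped of the xtrace prefixes, and with the interface names as discovered on this machine, the wiring amounts to:

  ip netns add cvl_0_0_ns_spdk                                   # namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address, default ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port

The two ping checks that follow (10.0.0.2 from the default namespace, 10.0.0.1 from inside cvl_0_0_ns_spdk) verify the path in both directions before any NVMe traffic is attempted.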
00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:20.324 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:20.324 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.486 ms 00:30:20.324 00:30:20.324 --- 10.0.0.2 ping statistics --- 00:30:20.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:20.324 rtt min/avg/max/mdev = 0.486/0.486/0.486/0.000 ms 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:20.324 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:20.324 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.277 ms 00:30:20.324 00:30:20.324 --- 10.0.0.1 ping statistics --- 00:30:20.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:20.324 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:20.324 ************************************ 00:30:20.324 START TEST nvmf_target_disconnect_tc1 00:30:20.324 ************************************ 00:30:20.324 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc1 00:30:20.325 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:20.325 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:30:20.325 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:20.325 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:20.325 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:20.325 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:20.325 11:53:44 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:20.325 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:20.325 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:20.325 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:20.325 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:30:20.325 11:53:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:20.325 [2024-11-15 11:53:45.094075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.325 [2024-11-15 11:53:45.094174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d47ad0 with addr=10.0.0.2, port=4420 00:30:20.325 [2024-11-15 11:53:45.094209] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:20.325 [2024-11-15 11:53:45.094227] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:20.325 [2024-11-15 11:53:45.094236] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:30:20.325 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:20.325 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:20.325 Initializing NVMe Controllers 00:30:20.325 11:53:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:30:20.325 11:53:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:20.325 11:53:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:20.325 11:53:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:20.325 00:30:20.325 real 0m0.144s 00:30:20.325 user 0m0.065s 00:30:20.325 sys 0m0.079s 00:30:20.325 11:53:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:20.325 11:53:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:20.325 ************************************ 00:30:20.325 END TEST nvmf_target_disconnect_tc1 00:30:20.325 ************************************ 00:30:20.325 11:53:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:20.325 11:53:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:30:20.325 11:53:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:30:20.325 11:53:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:20.325 ************************************ 00:30:20.325 START TEST nvmf_target_disconnect_tc2 00:30:20.325 ************************************ 00:30:20.325 11:53:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc2 00:30:20.325 11:53:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:30:20.325 11:53:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:20.325 11:53:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:20.325 11:53:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:20.325 11:53:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:20.325 11:53:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1264190 00:30:20.325 11:53:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1264190 00:30:20.325 11:53:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:20.325 11:53:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 1264190 ']' 00:30:20.325 11:53:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:20.325 11:53:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:20.325 11:53:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:20.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:20.325 11:53:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:20.325 11:53:45 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:20.325 [2024-11-15 11:53:45.262557] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:30:20.325 [2024-11-15 11:53:45.262648] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:20.325 [2024-11-15 11:53:45.363901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:20.325 [2024-11-15 11:53:45.416011] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:20.325 [2024-11-15 11:53:45.416063] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:20.325 [2024-11-15 11:53:45.416072] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:20.325 [2024-11-15 11:53:45.416079] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:20.325 [2024-11-15 11:53:45.416086] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:20.325 [2024-11-15 11:53:45.418550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:20.325 [2024-11-15 11:53:45.418708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:20.325 [2024-11-15 11:53:45.418976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:20.325 [2024-11-15 11:53:45.418980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:20.897 11:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:20.897 11:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:30:20.897 11:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:20.897 11:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:20.897 11:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:20.897 11:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:20.897 11:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:20.897 11:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.897 11:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:20.897 Malloc0 00:30:20.897 11:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.897 11:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:20.897 11:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.897 11:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:20.897 [2024-11-15 11:53:46.172833] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:20.897 11:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.897 11:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:20.897 11:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.897 11:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:20.897 11:53:46 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.897 11:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:20.897 11:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.897 11:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:20.897 11:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.897 11:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:20.897 11:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.897 11:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:20.897 [2024-11-15 11:53:46.213279] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:20.897 11:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.898 11:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:20.898 11:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:20.898 11:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:20.898 11:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:20.898 11:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1264260 00:30:20.898 11:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:30:20.898 11:53:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:22.815 11:53:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1264190 00:30:22.815 11:53:48 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:30:22.815 Read completed with error (sct=0, sc=8) 00:30:22.815 starting I/O failed 00:30:22.815 Read completed with error (sct=0, sc=8) 00:30:22.815 starting I/O failed 00:30:22.815 Read completed with error (sct=0, sc=8) 00:30:22.815 starting I/O failed 00:30:22.815 Read completed with error (sct=0, sc=8) 00:30:22.815 starting I/O failed 00:30:22.815 Read completed with error (sct=0, sc=8) 00:30:22.815 starting I/O failed 00:30:22.815 Read completed with error (sct=0, sc=8) 00:30:22.815 starting I/O failed 00:30:22.815 Read completed with error 
(sct=0, sc=8) 00:30:22.815 starting I/O failed 00:30:22.815 Read completed with error (sct=0, sc=8) 00:30:22.815 starting I/O failed 00:30:22.815 Read completed with error (sct=0, sc=8) 00:30:22.815 starting I/O failed 00:30:22.815 Read completed with error (sct=0, sc=8) 00:30:22.815 starting I/O failed 00:30:22.815 Read completed with error (sct=0, sc=8) 00:30:22.815 starting I/O failed 00:30:22.815 Read completed with error (sct=0, sc=8) 00:30:22.815 starting I/O failed 00:30:22.815 Read completed with error (sct=0, sc=8) 00:30:22.815 starting I/O failed 00:30:22.815 Read completed with error (sct=0, sc=8) 00:30:22.815 starting I/O failed 00:30:22.815 Read completed with error (sct=0, sc=8) 00:30:22.815 starting I/O failed 00:30:22.815 Read completed with error (sct=0, sc=8) 00:30:22.815 starting I/O failed 00:30:22.815 Write completed with error (sct=0, sc=8) 00:30:22.815 starting I/O failed 00:30:22.815 Write completed with error (sct=0, sc=8) 00:30:22.815 starting I/O failed 00:30:22.815 Read completed with error (sct=0, sc=8) 00:30:22.815 starting I/O failed 00:30:22.815 Read completed with error (sct=0, sc=8) 00:30:22.815 starting I/O failed 00:30:22.815 Write completed with error (sct=0, sc=8) 00:30:22.815 starting I/O failed 00:30:22.815 Write completed with error (sct=0, sc=8) 00:30:22.815 starting I/O failed 00:30:22.815 Read completed with error (sct=0, sc=8) 00:30:22.815 starting I/O failed 00:30:22.815 Write completed with error (sct=0, sc=8) 00:30:22.815 starting I/O failed 00:30:22.815 Write completed with error (sct=0, sc=8) 00:30:22.815 starting I/O failed 00:30:22.815 Read completed with error (sct=0, sc=8) 00:30:22.815 starting I/O failed 00:30:22.815 Write completed with error (sct=0, sc=8) 00:30:22.815 starting I/O failed 00:30:22.815 Write completed with error (sct=0, sc=8) 00:30:22.815 starting I/O failed 00:30:22.815 Read completed with error (sct=0, sc=8) 00:30:22.815 starting I/O failed 00:30:22.815 Write completed with error (sct=0, sc=8) 00:30:22.815 starting I/O failed 00:30:22.815 Read completed with error (sct=0, sc=8) 00:30:22.815 starting I/O failed 00:30:22.815 Write completed with error (sct=0, sc=8) 00:30:22.815 starting I/O failed 00:30:22.815 [2024-11-15 11:53:48.251673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.815 [2024-11-15 11:53:48.252105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.815 [2024-11-15 11:53:48.252136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.815 qpair failed and we were unable to recover it. 00:30:22.815 [2024-11-15 11:53:48.252474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.815 [2024-11-15 11:53:48.252488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.815 qpair failed and we were unable to recover it. 00:30:22.815 [2024-11-15 11:53:48.252843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.815 [2024-11-15 11:53:48.252908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.815 qpair failed and we were unable to recover it. 
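What this storm is showing: target_disconnect_tc2 launches the reconnect example against the target and then SIGKILLs the target process out from under it (the kill -9 1264190 above), so every queued I/O completes aborted and every reconnect attempt dies in connect() with errno 111. A stripped-down sketch of the scenario using the binary from this workspace; the pid variable and the surrounding target setup are assumed to be provided by the caller:

  reconnect=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
  "$reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  reconnectpid=$!
  sleep 2
  kill -9 "$nvmfpid"   # $nvmfpid: the nvmf_tgt pid, 1264190 in this run (assumed set by caller)
  sleep 2              # the initiator is now retrying against a dead listener

On the status codes: sct=0 is the NVMe generic command status type, and sc=8 in that set is the code for a command aborted due to SQ deletion (decode from the NVMe base spec, not from this log), which matches qpairs being torn down with I/O still outstanding.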
00:30:22.815 [2024-11-15 11:53:48.253258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.815 [2024-11-15 11:53:48.253272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.815 qpair failed and we were unable to recover it. 00:30:22.815 [2024-11-15 11:53:48.253611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.815 [2024-11-15 11:53:48.253625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.815 qpair failed and we were unable to recover it. 00:30:22.815 [2024-11-15 11:53:48.253975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.815 [2024-11-15 11:53:48.253987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.815 qpair failed and we were unable to recover it. 00:30:22.815 [2024-11-15 11:53:48.254315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.815 [2024-11-15 11:53:48.254328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.815 qpair failed and we were unable to recover it. 00:30:22.815 [2024-11-15 11:53:48.254820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.815 [2024-11-15 11:53:48.254886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.815 qpair failed and we were unable to recover it. 00:30:22.815 [2024-11-15 11:53:48.255220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.815 [2024-11-15 11:53:48.255236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.815 qpair failed and we were unable to recover it. 00:30:22.815 [2024-11-15 11:53:48.255762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.815 [2024-11-15 11:53:48.255829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.815 qpair failed and we were unable to recover it. 00:30:22.815 [2024-11-15 11:53:48.256159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.815 [2024-11-15 11:53:48.256172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.815 qpair failed and we were unable to recover it. 00:30:22.815 [2024-11-15 11:53:48.256521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.815 [2024-11-15 11:53:48.256535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.815 qpair failed and we were unable to recover it. 00:30:22.815 [2024-11-15 11:53:48.256884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.815 [2024-11-15 11:53:48.256900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.815 qpair failed and we were unable to recover it. 
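Every attempt in the cascade that follows dies the same way: errno 111 on Linux is ECONNREFUSED. The SIGKILLed target no longer has anything bound to 10.0.0.2:4420, so each connect() from posix_sock_create() is refused, nvme_tcp_qpair_connect_sock() reports the socket error, and the host gives that qpair up. An illustrative one-liner to confirm the errno name:

  python3 -c 'import errno; print(errno.errorcode[111])'    # -> ECONNREFUSED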
00:30:22.815 [2024-11-15 11:53:48.257251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.815 [2024-11-15 11:53:48.257262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.815 qpair failed and we were unable to recover it. 00:30:22.815 [2024-11-15 11:53:48.257589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.815 [2024-11-15 11:53:48.257602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.815 qpair failed and we were unable to recover it. 00:30:22.815 [2024-11-15 11:53:48.257921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.816 [2024-11-15 11:53:48.257931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.816 qpair failed and we were unable to recover it. 00:30:22.816 [2024-11-15 11:53:48.258259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.816 [2024-11-15 11:53:48.258270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.816 qpair failed and we were unable to recover it. 00:30:22.816 [2024-11-15 11:53:48.258390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.816 [2024-11-15 11:53:48.258401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.816 qpair failed and we were unable to recover it. 00:30:22.816 [2024-11-15 11:53:48.258720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.816 [2024-11-15 11:53:48.258730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.816 qpair failed and we were unable to recover it. 00:30:22.816 [2024-11-15 11:53:48.259073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.816 [2024-11-15 11:53:48.259086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.816 qpair failed and we were unable to recover it. 00:30:22.816 [2024-11-15 11:53:48.259368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.816 [2024-11-15 11:53:48.259379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.816 qpair failed and we were unable to recover it. 00:30:22.816 [2024-11-15 11:53:48.259719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.816 [2024-11-15 11:53:48.259731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.816 qpair failed and we were unable to recover it. 00:30:22.816 [2024-11-15 11:53:48.260078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.816 [2024-11-15 11:53:48.260089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.816 qpair failed and we were unable to recover it. 
00:30:22.816 [2024-11-15 11:53:48.260380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.816 [2024-11-15 11:53:48.260391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.816 qpair failed and we were unable to recover it. 00:30:22.816 [2024-11-15 11:53:48.260705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.816 [2024-11-15 11:53:48.260717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.816 qpair failed and we were unable to recover it. 00:30:22.816 [2024-11-15 11:53:48.260931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.816 [2024-11-15 11:53:48.260942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.816 qpair failed and we were unable to recover it. 00:30:22.816 [2024-11-15 11:53:48.261268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.816 [2024-11-15 11:53:48.261278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.816 qpair failed and we were unable to recover it. 00:30:22.816 [2024-11-15 11:53:48.261592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.816 [2024-11-15 11:53:48.261605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.816 qpair failed and we were unable to recover it. 00:30:22.816 [2024-11-15 11:53:48.261761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.816 [2024-11-15 11:53:48.261781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.816 qpair failed and we were unable to recover it. 00:30:22.816 [2024-11-15 11:53:48.262132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.816 [2024-11-15 11:53:48.262143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.816 qpair failed and we were unable to recover it. 00:30:22.816 [2024-11-15 11:53:48.262583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.816 [2024-11-15 11:53:48.262594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.816 qpair failed and we were unable to recover it. 00:30:22.816 [2024-11-15 11:53:48.262947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.816 [2024-11-15 11:53:48.262957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.816 qpair failed and we were unable to recover it. 00:30:22.816 [2024-11-15 11:53:48.263299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.816 [2024-11-15 11:53:48.263313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.816 qpair failed and we were unable to recover it. 
00:30:22.816 [2024-11-15 11:53:48.263500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.816 [2024-11-15 11:53:48.263512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.816 qpair failed and we were unable to recover it. 00:30:22.816 [2024-11-15 11:53:48.263857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.816 [2024-11-15 11:53:48.263869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.816 qpair failed and we were unable to recover it. 00:30:22.816 [2024-11-15 11:53:48.264202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.816 [2024-11-15 11:53:48.264212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.816 qpair failed and we were unable to recover it. 00:30:22.816 [2024-11-15 11:53:48.264442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.816 [2024-11-15 11:53:48.264454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.816 qpair failed and we were unable to recover it. 00:30:22.816 [2024-11-15 11:53:48.264672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.816 [2024-11-15 11:53:48.264684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.816 qpair failed and we were unable to recover it. 00:30:22.816 [2024-11-15 11:53:48.265062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.816 [2024-11-15 11:53:48.265072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.816 qpair failed and we were unable to recover it. 00:30:22.816 [2024-11-15 11:53:48.265293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.816 [2024-11-15 11:53:48.265303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.816 qpair failed and we were unable to recover it. 00:30:22.816 [2024-11-15 11:53:48.265659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.816 [2024-11-15 11:53:48.265671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.816 qpair failed and we were unable to recover it. 00:30:22.816 [2024-11-15 11:53:48.265994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.816 [2024-11-15 11:53:48.266006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.816 qpair failed and we were unable to recover it. 00:30:22.816 [2024-11-15 11:53:48.266205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.816 [2024-11-15 11:53:48.266217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.816 qpair failed and we were unable to recover it. 
00:30:22.816 [2024-11-15 11:53:48.266498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.816 [2024-11-15 11:53:48.266510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.816 qpair failed and we were unable to recover it. 00:30:22.816 [2024-11-15 11:53:48.266726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.816 [2024-11-15 11:53:48.266738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.816 qpair failed and we were unable to recover it. 00:30:22.816 [2024-11-15 11:53:48.267060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.816 [2024-11-15 11:53:48.267070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.816 qpair failed and we were unable to recover it. 00:30:22.816 [2024-11-15 11:53:48.267383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.816 [2024-11-15 11:53:48.267394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.816 qpair failed and we were unable to recover it. 00:30:22.816 [2024-11-15 11:53:48.267738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.816 [2024-11-15 11:53:48.267749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.816 qpair failed and we were unable to recover it. 00:30:22.816 [2024-11-15 11:53:48.268076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.816 [2024-11-15 11:53:48.268086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.816 qpair failed and we were unable to recover it. 00:30:22.816 [2024-11-15 11:53:48.268420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.816 [2024-11-15 11:53:48.268431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.816 qpair failed and we were unable to recover it. 00:30:22.816 [2024-11-15 11:53:48.268753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.816 [2024-11-15 11:53:48.268766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.816 qpair failed and we were unable to recover it. 00:30:22.816 [2024-11-15 11:53:48.269135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.816 [2024-11-15 11:53:48.269146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.816 qpair failed and we were unable to recover it. 00:30:22.816 [2024-11-15 11:53:48.269444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.816 [2024-11-15 11:53:48.269454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.816 qpair failed and we were unable to recover it. 
00:30:22.816 [2024-11-15 11:53:48.269689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.816 [2024-11-15 11:53:48.269700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.817 qpair failed and we were unable to recover it. 00:30:22.817 [2024-11-15 11:53:48.270047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.817 [2024-11-15 11:53:48.270058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.817 qpair failed and we were unable to recover it. 00:30:22.817 [2024-11-15 11:53:48.270289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.817 [2024-11-15 11:53:48.270300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.817 qpair failed and we were unable to recover it. 00:30:22.817 [2024-11-15 11:53:48.270488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.817 [2024-11-15 11:53:48.270499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.817 qpair failed and we were unable to recover it. 00:30:22.817 [2024-11-15 11:53:48.270817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.817 [2024-11-15 11:53:48.270830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.817 qpair failed and we were unable to recover it. 00:30:22.817 [2024-11-15 11:53:48.271117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.817 [2024-11-15 11:53:48.271129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.817 qpair failed and we were unable to recover it. 00:30:22.817 [2024-11-15 11:53:48.271464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.817 [2024-11-15 11:53:48.271474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.817 qpair failed and we were unable to recover it. 00:30:22.817 [2024-11-15 11:53:48.271642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.817 [2024-11-15 11:53:48.271652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.817 qpair failed and we were unable to recover it. 00:30:22.817 [2024-11-15 11:53:48.272015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.817 [2024-11-15 11:53:48.272025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.817 qpair failed and we were unable to recover it. 00:30:22.817 [2024-11-15 11:53:48.272349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.817 [2024-11-15 11:53:48.272360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.817 qpair failed and we were unable to recover it. 
00:30:22.817 [2024-11-15 11:53:48.272687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.817 [2024-11-15 11:53:48.272697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.817 qpair failed and we were unable to recover it. 00:30:22.817 [2024-11-15 11:53:48.272971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.817 [2024-11-15 11:53:48.272981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.817 qpair failed and we were unable to recover it. 00:30:22.817 [2024-11-15 11:53:48.273193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.817 [2024-11-15 11:53:48.273205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.817 qpair failed and we were unable to recover it. 00:30:22.817 [2024-11-15 11:53:48.273520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.817 [2024-11-15 11:53:48.273531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.817 qpair failed and we were unable to recover it. 00:30:22.817 [2024-11-15 11:53:48.273836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.817 [2024-11-15 11:53:48.273847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.817 qpair failed and we were unable to recover it. 00:30:22.817 [2024-11-15 11:53:48.274206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.817 [2024-11-15 11:53:48.274216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.817 qpair failed and we were unable to recover it. 00:30:22.817 [2024-11-15 11:53:48.274516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.817 [2024-11-15 11:53:48.274526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.817 qpair failed and we were unable to recover it. 00:30:22.817 [2024-11-15 11:53:48.274628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.817 [2024-11-15 11:53:48.274640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.817 qpair failed and we were unable to recover it. 00:30:22.817 [2024-11-15 11:53:48.275006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.817 [2024-11-15 11:53:48.275017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.817 qpair failed and we were unable to recover it. 00:30:22.817 [2024-11-15 11:53:48.275325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.817 [2024-11-15 11:53:48.275337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.817 qpair failed and we were unable to recover it. 
00:30:22.817 [2024-11-15 11:53:48.275554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.817 [2024-11-15 11:53:48.275586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.817 qpair failed and we were unable to recover it. 00:30:22.817 [2024-11-15 11:53:48.275873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.817 [2024-11-15 11:53:48.275885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.817 qpair failed and we were unable to recover it. 00:30:22.817 [2024-11-15 11:53:48.276202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.817 [2024-11-15 11:53:48.276213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.817 qpair failed and we were unable to recover it. 00:30:22.817 [2024-11-15 11:53:48.276503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.817 [2024-11-15 11:53:48.276515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.817 qpair failed and we were unable to recover it. 00:30:22.817 [2024-11-15 11:53:48.276860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.817 [2024-11-15 11:53:48.276872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.817 qpair failed and we were unable to recover it. 00:30:22.817 [2024-11-15 11:53:48.277192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.817 [2024-11-15 11:53:48.277204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.817 qpair failed and we were unable to recover it. 00:30:22.817 [2024-11-15 11:53:48.277507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.817 [2024-11-15 11:53:48.277517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.817 qpair failed and we were unable to recover it. 00:30:22.817 [2024-11-15 11:53:48.277837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.817 [2024-11-15 11:53:48.277848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.817 qpair failed and we were unable to recover it. 00:30:22.817 [2024-11-15 11:53:48.278175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.817 [2024-11-15 11:53:48.278186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.817 qpair failed and we were unable to recover it. 00:30:22.817 [2024-11-15 11:53:48.278402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.817 [2024-11-15 11:53:48.278412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.817 qpair failed and we were unable to recover it. 
00:30:22.817 [2024-11-15 11:53:48.278738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.817 [2024-11-15 11:53:48.278749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.817 qpair failed and we were unable to recover it. 00:30:22.817 [2024-11-15 11:53:48.279046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.817 [2024-11-15 11:53:48.279057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.817 qpair failed and we were unable to recover it. 00:30:22.817 [2024-11-15 11:53:48.279369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.817 [2024-11-15 11:53:48.279378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.817 qpair failed and we were unable to recover it. 00:30:22.817 [2024-11-15 11:53:48.279638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.817 [2024-11-15 11:53:48.279649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.817 qpair failed and we were unable to recover it. 00:30:22.817 [2024-11-15 11:53:48.279969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.817 [2024-11-15 11:53:48.279980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.817 qpair failed and we were unable to recover it. 00:30:22.817 [2024-11-15 11:53:48.280304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.817 [2024-11-15 11:53:48.280318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.817 qpair failed and we were unable to recover it. 00:30:22.817 [2024-11-15 11:53:48.280689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.817 [2024-11-15 11:53:48.280704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.817 qpair failed and we were unable to recover it. 00:30:22.817 [2024-11-15 11:53:48.281009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.817 [2024-11-15 11:53:48.281022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.817 qpair failed and we were unable to recover it. 00:30:22.817 [2024-11-15 11:53:48.281345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.817 [2024-11-15 11:53:48.281358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.817 qpair failed and we were unable to recover it. 00:30:22.817 [2024-11-15 11:53:48.281631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.818 [2024-11-15 11:53:48.281644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.818 qpair failed and we were unable to recover it. 
00:30:22.818 [2024-11-15 11:53:48.281958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.818 [2024-11-15 11:53:48.281970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.818 qpair failed and we were unable to recover it. 00:30:22.818 [2024-11-15 11:53:48.282274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.818 [2024-11-15 11:53:48.282289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.818 qpair failed and we were unable to recover it. 00:30:22.818 [2024-11-15 11:53:48.282616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.818 [2024-11-15 11:53:48.282630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.818 qpair failed and we were unable to recover it. 00:30:22.818 [2024-11-15 11:53:48.282961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.818 [2024-11-15 11:53:48.282974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.818 qpair failed and we were unable to recover it. 00:30:22.818 [2024-11-15 11:53:48.283293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.818 [2024-11-15 11:53:48.283305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.818 qpair failed and we were unable to recover it. 00:30:22.818 [2024-11-15 11:53:48.283494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.818 [2024-11-15 11:53:48.283508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.818 qpair failed and we were unable to recover it. 00:30:22.818 [2024-11-15 11:53:48.283775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.818 [2024-11-15 11:53:48.283788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.818 qpair failed and we were unable to recover it. 00:30:22.818 [2024-11-15 11:53:48.284107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.818 [2024-11-15 11:53:48.284123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.818 qpair failed and we were unable to recover it. 00:30:22.818 [2024-11-15 11:53:48.284420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.818 [2024-11-15 11:53:48.284434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.818 qpair failed and we were unable to recover it. 00:30:22.818 [2024-11-15 11:53:48.284765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.818 [2024-11-15 11:53:48.284779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.818 qpair failed and we were unable to recover it. 
00:30:22.818 [2024-11-15 11:53:48.285077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.818 [2024-11-15 11:53:48.285090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.818 qpair failed and we were unable to recover it. 00:30:22.818 [2024-11-15 11:53:48.285430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.818 [2024-11-15 11:53:48.285443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.818 qpair failed and we were unable to recover it. 00:30:22.818 [2024-11-15 11:53:48.285785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.818 [2024-11-15 11:53:48.285800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.818 qpair failed and we were unable to recover it. 00:30:22.818 [2024-11-15 11:53:48.286034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.818 [2024-11-15 11:53:48.286048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.818 qpair failed and we were unable to recover it. 00:30:22.818 [2024-11-15 11:53:48.286277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.818 [2024-11-15 11:53:48.286289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.818 qpair failed and we were unable to recover it. 00:30:22.818 [2024-11-15 11:53:48.286591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.818 [2024-11-15 11:53:48.286604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.818 qpair failed and we were unable to recover it. 00:30:22.818 [2024-11-15 11:53:48.286858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.818 [2024-11-15 11:53:48.286872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.818 qpair failed and we were unable to recover it. 00:30:22.818 [2024-11-15 11:53:48.287198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.818 [2024-11-15 11:53:48.287212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.818 qpair failed and we were unable to recover it. 00:30:22.818 [2024-11-15 11:53:48.287535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.818 [2024-11-15 11:53:48.287549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.818 qpair failed and we were unable to recover it. 00:30:22.818 [2024-11-15 11:53:48.287918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.818 [2024-11-15 11:53:48.287932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.818 qpair failed and we were unable to recover it. 
00:30:22.818 [2024-11-15 11:53:48.288290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.818 [2024-11-15 11:53:48.288304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.818 qpair failed and we were unable to recover it. 00:30:22.818 [2024-11-15 11:53:48.288629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.818 [2024-11-15 11:53:48.288642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.818 qpair failed and we were unable to recover it. 00:30:22.818 [2024-11-15 11:53:48.288959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.818 [2024-11-15 11:53:48.288973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.818 qpair failed and we were unable to recover it. 00:30:22.818 [2024-11-15 11:53:48.289294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.818 [2024-11-15 11:53:48.289307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.818 qpair failed and we were unable to recover it. 00:30:22.818 [2024-11-15 11:53:48.289619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.818 [2024-11-15 11:53:48.289634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.818 qpair failed and we were unable to recover it. 00:30:22.818 [2024-11-15 11:53:48.289973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.818 [2024-11-15 11:53:48.289986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.818 qpair failed and we were unable to recover it. 00:30:22.818 [2024-11-15 11:53:48.290176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.818 [2024-11-15 11:53:48.290193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.818 qpair failed and we were unable to recover it. 00:30:22.818 [2024-11-15 11:53:48.290514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.818 [2024-11-15 11:53:48.290530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.818 qpair failed and we were unable to recover it. 00:30:22.818 [2024-11-15 11:53:48.290894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.818 [2024-11-15 11:53:48.290912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.818 qpair failed and we were unable to recover it. 00:30:22.818 [2024-11-15 11:53:48.291254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.818 [2024-11-15 11:53:48.291271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.818 qpair failed and we were unable to recover it. 
00:30:22.818 [2024-11-15 11:53:48.291616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.818 [2024-11-15 11:53:48.291636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.818 qpair failed and we were unable to recover it. 00:30:22.818 [2024-11-15 11:53:48.291968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.818 [2024-11-15 11:53:48.291985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.818 qpair failed and we were unable to recover it. 00:30:22.818 [2024-11-15 11:53:48.292289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.818 [2024-11-15 11:53:48.292306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.818 qpair failed and we were unable to recover it. 00:30:22.818 [2024-11-15 11:53:48.292592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.818 [2024-11-15 11:53:48.292610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.818 qpair failed and we were unable to recover it. 00:30:22.818 [2024-11-15 11:53:48.292849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.818 [2024-11-15 11:53:48.292865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.818 qpair failed and we were unable to recover it. 00:30:22.818 [2024-11-15 11:53:48.293234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.818 [2024-11-15 11:53:48.293252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.818 qpair failed and we were unable to recover it. 00:30:22.818 [2024-11-15 11:53:48.293596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.818 [2024-11-15 11:53:48.293615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.818 qpair failed and we were unable to recover it. 00:30:22.818 [2024-11-15 11:53:48.293946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.818 [2024-11-15 11:53:48.293962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.818 qpair failed and we were unable to recover it. 00:30:22.819 [2024-11-15 11:53:48.294280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.819 [2024-11-15 11:53:48.294298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.819 qpair failed and we were unable to recover it. 00:30:22.819 [2024-11-15 11:53:48.294618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.819 [2024-11-15 11:53:48.294635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.819 qpair failed and we were unable to recover it. 
00:30:22.819 [2024-11-15 11:53:48.294955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.819 [2024-11-15 11:53:48.294973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.819 qpair failed and we were unable to recover it. 00:30:22.819 [2024-11-15 11:53:48.295303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.819 [2024-11-15 11:53:48.295320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.819 qpair failed and we were unable to recover it. 00:30:22.819 [2024-11-15 11:53:48.295637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.819 [2024-11-15 11:53:48.295654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.819 qpair failed and we were unable to recover it. 00:30:22.819 [2024-11-15 11:53:48.295874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.819 [2024-11-15 11:53:48.295893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.819 qpair failed and we were unable to recover it. 00:30:22.819 [2024-11-15 11:53:48.296100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.819 [2024-11-15 11:53:48.296119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.819 qpair failed and we were unable to recover it. 00:30:22.819 [2024-11-15 11:53:48.296500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.819 [2024-11-15 11:53:48.296517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.819 qpair failed and we were unable to recover it. 00:30:22.819 [2024-11-15 11:53:48.296776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.819 [2024-11-15 11:53:48.296794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.819 qpair failed and we were unable to recover it. 00:30:22.819 [2024-11-15 11:53:48.297137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.819 [2024-11-15 11:53:48.297154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.819 qpair failed and we were unable to recover it. 00:30:22.819 [2024-11-15 11:53:48.297498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.819 [2024-11-15 11:53:48.297516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.819 qpair failed and we were unable to recover it. 00:30:22.819 [2024-11-15 11:53:48.297854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.819 [2024-11-15 11:53:48.297871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.819 qpair failed and we were unable to recover it. 
00:30:22.819 [2024-11-15 11:53:48.298198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.819 [2024-11-15 11:53:48.298216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.819 qpair failed and we were unable to recover it. 00:30:22.819 [2024-11-15 11:53:48.298550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.819 [2024-11-15 11:53:48.298573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.819 qpair failed and we were unable to recover it. 00:30:22.819 [2024-11-15 11:53:48.298889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.819 [2024-11-15 11:53:48.298906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.819 qpair failed and we were unable to recover it. 00:30:22.819 [2024-11-15 11:53:48.299105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.819 [2024-11-15 11:53:48.299122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.819 qpair failed and we were unable to recover it. 00:30:22.819 [2024-11-15 11:53:48.299454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.819 [2024-11-15 11:53:48.299471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.819 qpair failed and we were unable to recover it. 00:30:22.819 [2024-11-15 11:53:48.299720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.819 [2024-11-15 11:53:48.299737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.819 qpair failed and we were unable to recover it. 00:30:22.819 [2024-11-15 11:53:48.300081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.819 [2024-11-15 11:53:48.300097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.819 qpair failed and we were unable to recover it. 00:30:22.819 [2024-11-15 11:53:48.300387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.819 [2024-11-15 11:53:48.300405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.819 qpair failed and we were unable to recover it. 00:30:22.819 [2024-11-15 11:53:48.300647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.819 [2024-11-15 11:53:48.300665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.819 qpair failed and we were unable to recover it. 00:30:22.819 [2024-11-15 11:53:48.301018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.819 [2024-11-15 11:53:48.301035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:22.819 qpair failed and we were unable to recover it. 
00:30:22.819 [2024-11-15 11:53:48.301368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.819 [2024-11-15 11:53:48.301385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:22.819 qpair failed and we were unable to recover it.
[... the same three-record failure (connect() errno = 111 -> sock connection error on tqpair=0x80b0c0 at 10.0.0.2:4420 -> "qpair failed and we were unable to recover it.") repeats continuously, timestamps 11:53:48.301 through 11:53:48.381 ...]
00:30:23.095 [2024-11-15 11:53:48.381329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.095 [2024-11-15 11:53:48.381358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.095 qpair failed and we were unable to recover it.
00:30:23.095 [2024-11-15 11:53:48.381751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.095 [2024-11-15 11:53:48.381781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.095 qpair failed and we were unable to recover it. 00:30:23.095 [2024-11-15 11:53:48.382124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.095 [2024-11-15 11:53:48.382152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.095 qpair failed and we were unable to recover it. 00:30:23.095 [2024-11-15 11:53:48.382500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.095 [2024-11-15 11:53:48.382527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.095 qpair failed and we were unable to recover it. 00:30:23.095 [2024-11-15 11:53:48.382919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.095 [2024-11-15 11:53:48.382950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.095 qpair failed and we were unable to recover it. 00:30:23.095 [2024-11-15 11:53:48.383279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.095 [2024-11-15 11:53:48.383310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.095 qpair failed and we were unable to recover it. 00:30:23.095 [2024-11-15 11:53:48.383551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.095 [2024-11-15 11:53:48.383595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.095 qpair failed and we were unable to recover it. 00:30:23.095 [2024-11-15 11:53:48.384034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.095 [2024-11-15 11:53:48.384062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.095 qpair failed and we were unable to recover it. 00:30:23.095 [2024-11-15 11:53:48.384416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.095 [2024-11-15 11:53:48.384451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.095 qpair failed and we were unable to recover it. 00:30:23.095 [2024-11-15 11:53:48.384792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.095 [2024-11-15 11:53:48.384820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.095 qpair failed and we were unable to recover it. 00:30:23.095 [2024-11-15 11:53:48.385188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.095 [2024-11-15 11:53:48.385229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.095 qpair failed and we were unable to recover it. 
00:30:23.095 [2024-11-15 11:53:48.385559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.095 [2024-11-15 11:53:48.385600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.095 qpair failed and we were unable to recover it. 00:30:23.095 [2024-11-15 11:53:48.385954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.095 [2024-11-15 11:53:48.385983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.095 qpair failed and we were unable to recover it. 00:30:23.095 [2024-11-15 11:53:48.386239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.095 [2024-11-15 11:53:48.386267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.095 qpair failed and we were unable to recover it. 00:30:23.095 [2024-11-15 11:53:48.386631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.095 [2024-11-15 11:53:48.386662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.095 qpair failed and we were unable to recover it. 00:30:23.095 [2024-11-15 11:53:48.386988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.095 [2024-11-15 11:53:48.387016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.095 qpair failed and we were unable to recover it. 00:30:23.095 [2024-11-15 11:53:48.387391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.096 [2024-11-15 11:53:48.387419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.096 qpair failed and we were unable to recover it. 00:30:23.096 [2024-11-15 11:53:48.387786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.096 [2024-11-15 11:53:48.387816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.096 qpair failed and we were unable to recover it. 00:30:23.096 [2024-11-15 11:53:48.388174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.096 [2024-11-15 11:53:48.388204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.096 qpair failed and we were unable to recover it. 00:30:23.096 [2024-11-15 11:53:48.388539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.096 [2024-11-15 11:53:48.388580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.096 qpair failed and we were unable to recover it. 00:30:23.096 [2024-11-15 11:53:48.388861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.096 [2024-11-15 11:53:48.388889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.096 qpair failed and we were unable to recover it. 
00:30:23.096 [2024-11-15 11:53:48.389236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.096 [2024-11-15 11:53:48.389263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.096 qpair failed and we were unable to recover it. 00:30:23.096 [2024-11-15 11:53:48.389594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.096 [2024-11-15 11:53:48.389624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.096 qpair failed and we were unable to recover it. 00:30:23.096 [2024-11-15 11:53:48.389970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.096 [2024-11-15 11:53:48.389998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.096 qpair failed and we were unable to recover it. 00:30:23.096 [2024-11-15 11:53:48.390329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.096 [2024-11-15 11:53:48.390360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.096 qpair failed and we were unable to recover it. 00:30:23.096 [2024-11-15 11:53:48.390505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.096 [2024-11-15 11:53:48.390535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.096 qpair failed and we were unable to recover it. 00:30:23.096 [2024-11-15 11:53:48.390941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.096 [2024-11-15 11:53:48.390971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.096 qpair failed and we were unable to recover it. 00:30:23.096 [2024-11-15 11:53:48.391289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.096 [2024-11-15 11:53:48.391320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.096 qpair failed and we were unable to recover it. 00:30:23.096 [2024-11-15 11:53:48.391689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.096 [2024-11-15 11:53:48.391720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.096 qpair failed and we were unable to recover it. 00:30:23.096 [2024-11-15 11:53:48.392083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.096 [2024-11-15 11:53:48.392112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.096 qpair failed and we were unable to recover it. 00:30:23.096 [2024-11-15 11:53:48.392478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.096 [2024-11-15 11:53:48.392507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.096 qpair failed and we were unable to recover it. 
00:30:23.096 [2024-11-15 11:53:48.392870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.096 [2024-11-15 11:53:48.392901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.096 qpair failed and we were unable to recover it. 00:30:23.096 [2024-11-15 11:53:48.393219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.096 [2024-11-15 11:53:48.393248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.096 qpair failed and we were unable to recover it. 00:30:23.096 [2024-11-15 11:53:48.393494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.096 [2024-11-15 11:53:48.393525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.096 qpair failed and we were unable to recover it. 00:30:23.096 [2024-11-15 11:53:48.393903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.096 [2024-11-15 11:53:48.393933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.096 qpair failed and we were unable to recover it. 00:30:23.096 [2024-11-15 11:53:48.394171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.096 [2024-11-15 11:53:48.394202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.096 qpair failed and we were unable to recover it. 00:30:23.096 [2024-11-15 11:53:48.394585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.096 [2024-11-15 11:53:48.394615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.096 qpair failed and we were unable to recover it. 00:30:23.096 [2024-11-15 11:53:48.394876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.096 [2024-11-15 11:53:48.394906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.096 qpair failed and we were unable to recover it. 00:30:23.096 [2024-11-15 11:53:48.395284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.096 [2024-11-15 11:53:48.395313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.096 qpair failed and we were unable to recover it. 00:30:23.096 [2024-11-15 11:53:48.395682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.096 [2024-11-15 11:53:48.395711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.096 qpair failed and we were unable to recover it. 00:30:23.096 [2024-11-15 11:53:48.396058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.096 [2024-11-15 11:53:48.396086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.096 qpair failed and we were unable to recover it. 
00:30:23.096 [2024-11-15 11:53:48.396411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.096 [2024-11-15 11:53:48.396440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.096 qpair failed and we were unable to recover it. 00:30:23.096 [2024-11-15 11:53:48.396700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.096 [2024-11-15 11:53:48.396730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.096 qpair failed and we were unable to recover it. 00:30:23.096 [2024-11-15 11:53:48.397081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.096 [2024-11-15 11:53:48.397110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.096 qpair failed and we were unable to recover it. 00:30:23.096 [2024-11-15 11:53:48.397345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.096 [2024-11-15 11:53:48.397377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.096 qpair failed and we were unable to recover it. 00:30:23.096 [2024-11-15 11:53:48.397747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.096 [2024-11-15 11:53:48.397778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.096 qpair failed and we were unable to recover it. 00:30:23.096 [2024-11-15 11:53:48.398148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.096 [2024-11-15 11:53:48.398176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.096 qpair failed and we were unable to recover it. 00:30:23.096 [2024-11-15 11:53:48.398535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.096 [2024-11-15 11:53:48.398572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.096 qpair failed and we were unable to recover it. 00:30:23.096 [2024-11-15 11:53:48.398903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.096 [2024-11-15 11:53:48.398932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.096 qpair failed and we were unable to recover it. 00:30:23.096 [2024-11-15 11:53:48.399293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.096 [2024-11-15 11:53:48.399322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.096 qpair failed and we were unable to recover it. 00:30:23.096 [2024-11-15 11:53:48.399701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.096 [2024-11-15 11:53:48.399730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.096 qpair failed and we were unable to recover it. 
00:30:23.096 [2024-11-15 11:53:48.400134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.096 [2024-11-15 11:53:48.400169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.096 qpair failed and we were unable to recover it. 00:30:23.096 [2024-11-15 11:53:48.400497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.096 [2024-11-15 11:53:48.400525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.096 qpair failed and we were unable to recover it. 00:30:23.096 [2024-11-15 11:53:48.400911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.096 [2024-11-15 11:53:48.400941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.096 qpair failed and we were unable to recover it. 00:30:23.096 [2024-11-15 11:53:48.401299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.096 [2024-11-15 11:53:48.401329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.096 qpair failed and we were unable to recover it. 00:30:23.096 [2024-11-15 11:53:48.401702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-15 11:53:48.401732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-15 11:53:48.402094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-15 11:53:48.402122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-15 11:53:48.402484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-15 11:53:48.402513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-15 11:53:48.402847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-15 11:53:48.402877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-15 11:53:48.403251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-15 11:53:48.403280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-15 11:53:48.403637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-15 11:53:48.403665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 
00:30:23.097 [2024-11-15 11:53:48.404033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-15 11:53:48.404062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-15 11:53:48.404380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-15 11:53:48.404407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-15 11:53:48.404636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-15 11:53:48.404669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-15 11:53:48.405052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-15 11:53:48.405082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-15 11:53:48.405453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-15 11:53:48.405481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-15 11:53:48.405808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-15 11:53:48.405837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-15 11:53:48.406078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-15 11:53:48.406108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-15 11:53:48.406494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-15 11:53:48.406522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-15 11:53:48.406881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-15 11:53:48.406911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-15 11:53:48.407175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-15 11:53:48.407203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 
00:30:23.097 [2024-11-15 11:53:48.407576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-15 11:53:48.407606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-15 11:53:48.407970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-15 11:53:48.407999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-15 11:53:48.408343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-15 11:53:48.408372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-15 11:53:48.408744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-15 11:53:48.408774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-15 11:53:48.409133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-15 11:53:48.409162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-15 11:53:48.409460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-15 11:53:48.409489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-15 11:53:48.409881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-15 11:53:48.409911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-15 11:53:48.410245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-15 11:53:48.410274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-15 11:53:48.410629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-15 11:53:48.410659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-15 11:53:48.410913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-15 11:53:48.410941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 
00:30:23.097 [2024-11-15 11:53:48.411319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-15 11:53:48.411347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-15 11:53:48.411701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-15 11:53:48.411732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-15 11:53:48.412090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-15 11:53:48.412118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-15 11:53:48.412445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-15 11:53:48.412473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-15 11:53:48.412871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-15 11:53:48.412902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-15 11:53:48.413227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-15 11:53:48.413258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-15 11:53:48.413589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-15 11:53:48.413620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-15 11:53:48.414033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-15 11:53:48.414062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-15 11:53:48.414422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-15 11:53:48.414450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-15 11:53:48.414721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-15 11:53:48.414750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 
00:30:23.097 [2024-11-15 11:53:48.415135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-15 11:53:48.415163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-15 11:53:48.415527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-15 11:53:48.415556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.097 [2024-11-15 11:53:48.415927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.097 [2024-11-15 11:53:48.415957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.097 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-15 11:53:48.416289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-15 11:53:48.416317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-15 11:53:48.416704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-15 11:53:48.416735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-15 11:53:48.417109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-15 11:53:48.417139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-15 11:53:48.417503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-15 11:53:48.417533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-15 11:53:48.417969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-15 11:53:48.417999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-15 11:53:48.418364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-15 11:53:48.418392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-15 11:53:48.418739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-15 11:53:48.418769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 
00:30:23.098 [2024-11-15 11:53:48.419093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-15 11:53:48.419123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-15 11:53:48.419490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-15 11:53:48.419519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-15 11:53:48.419782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-15 11:53:48.419811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-15 11:53:48.420127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-15 11:53:48.420155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-15 11:53:48.420517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-15 11:53:48.420546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-15 11:53:48.420931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-15 11:53:48.420961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-15 11:53:48.421347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-15 11:53:48.421375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-15 11:53:48.421640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-15 11:53:48.421669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-15 11:53:48.422041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-15 11:53:48.422071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-15 11:53:48.422418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-15 11:53:48.422455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 
00:30:23.098 [2024-11-15 11:53:48.422798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-15 11:53:48.422828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-15 11:53:48.423069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-15 11:53:48.423101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-15 11:53:48.423423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-15 11:53:48.423451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-15 11:53:48.423797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-15 11:53:48.423825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-15 11:53:48.424192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-15 11:53:48.424222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-15 11:53:48.424595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-15 11:53:48.424626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-15 11:53:48.424938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-15 11:53:48.424967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-15 11:53:48.425323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-15 11:53:48.425351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-15 11:53:48.425730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-15 11:53:48.425765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-15 11:53:48.426120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-15 11:53:48.426148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 
00:30:23.098 [2024-11-15 11:53:48.426516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-15 11:53:48.426544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-15 11:53:48.426851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-15 11:53:48.426880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-15 11:53:48.427256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-15 11:53:48.427283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-15 11:53:48.427643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-15 11:53:48.427675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-15 11:53:48.428044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-15 11:53:48.428073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-15 11:53:48.428434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-15 11:53:48.428463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-15 11:53:48.428806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-15 11:53:48.428836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-15 11:53:48.429216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-15 11:53:48.429244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-15 11:53:48.429667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-15 11:53:48.429697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 00:30:23.098 [2024-11-15 11:53:48.429944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-15 11:53:48.429972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.098 qpair failed and we were unable to recover it. 
00:30:23.098 [2024-11-15 11:53:48.430339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.098 [2024-11-15 11:53:48.430368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.099 qpair failed and we were unable to recover it. 00:30:23.099 [2024-11-15 11:53:48.430713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.099 [2024-11-15 11:53:48.430742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.099 qpair failed and we were unable to recover it. 00:30:23.099 [2024-11-15 11:53:48.431186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.099 [2024-11-15 11:53:48.431216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.099 qpair failed and we were unable to recover it. 00:30:23.099 [2024-11-15 11:53:48.431580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.099 [2024-11-15 11:53:48.431611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.099 qpair failed and we were unable to recover it. 00:30:23.099 [2024-11-15 11:53:48.431985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.099 [2024-11-15 11:53:48.432014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.099 qpair failed and we were unable to recover it. 00:30:23.099 [2024-11-15 11:53:48.432387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.099 [2024-11-15 11:53:48.432415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.099 qpair failed and we were unable to recover it. 00:30:23.099 [2024-11-15 11:53:48.432773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.099 [2024-11-15 11:53:48.432804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.099 qpair failed and we were unable to recover it. 00:30:23.099 [2024-11-15 11:53:48.433032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.099 [2024-11-15 11:53:48.433061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.099 qpair failed and we were unable to recover it. 00:30:23.099 [2024-11-15 11:53:48.433407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.099 [2024-11-15 11:53:48.433437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.099 qpair failed and we were unable to recover it. 00:30:23.099 [2024-11-15 11:53:48.433787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.099 [2024-11-15 11:53:48.433817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.099 qpair failed and we were unable to recover it. 
00:30:23.099 [2024-11-15 11:53:48.434183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.099 [2024-11-15 11:53:48.434211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.099 qpair failed and we were unable to recover it.
00:30:23.099 [... the same three-line sequence repeats for every reconnect attempt between the two shown here, all failing with errno = 111 against tqpair=0x80b0c0 at 10.0.0.2 port 4420 ...]
00:30:23.104 [2024-11-15 11:53:48.509121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.104 [2024-11-15 11:53:48.509155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.104 qpair failed and we were unable to recover it.
00:30:23.104 [2024-11-15 11:53:48.509532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-15 11:53:48.509569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-15 11:53:48.509959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-15 11:53:48.509989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-15 11:53:48.510215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-15 11:53:48.510247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-15 11:53:48.510634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-15 11:53:48.510668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-15 11:53:48.511056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-15 11:53:48.511086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-15 11:53:48.511425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-15 11:53:48.511454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-15 11:53:48.511760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-15 11:53:48.511790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-15 11:53:48.512212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-15 11:53:48.512241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-15 11:53:48.512611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-15 11:53:48.512641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-15 11:53:48.513026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-15 11:53:48.513055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 
00:30:23.104 [2024-11-15 11:53:48.513400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-15 11:53:48.513428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-15 11:53:48.513781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-15 11:53:48.513812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-15 11:53:48.514186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-15 11:53:48.514220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.104 [2024-11-15 11:53:48.514619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.104 [2024-11-15 11:53:48.514649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.104 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-15 11:53:48.515022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-15 11:53:48.515052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-15 11:53:48.515388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-15 11:53:48.515418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-15 11:53:48.515775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-15 11:53:48.515806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-15 11:53:48.516173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-15 11:53:48.516202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-15 11:53:48.516582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-15 11:53:48.516612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-15 11:53:48.516987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-15 11:53:48.517020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 
00:30:23.105 [2024-11-15 11:53:48.517395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-15 11:53:48.517426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-15 11:53:48.517765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-15 11:53:48.517796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-15 11:53:48.518157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-15 11:53:48.518186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-15 11:53:48.518505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-15 11:53:48.518540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-15 11:53:48.519008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-15 11:53:48.519040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-15 11:53:48.519316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-15 11:53:48.519347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-15 11:53:48.519694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-15 11:53:48.519724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-15 11:53:48.520107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-15 11:53:48.520135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-15 11:53:48.520503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-15 11:53:48.520532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 
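errno = 111 in the entries above is ECONNREFUSED: nothing was listening on 10.0.0.2:4420 while the target was down, so each reconnect attempt fails as soon as the TCP handshake is refused, and the initiator immediately tries again. A minimal sketch of that failure mode with plain POSIX sockets (an illustration only, not SPDK's actual posix_sock_create()):

```c
/* Illustration only (plain POSIX sockets, not SPDK's posix_sock_create()):
 * with no listener on 10.0.0.2:4420, connect() fails with errno 111,
 * ECONNREFUSED, exactly as in the log entries above. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_port = htons(4420),	/* NVMe/TCP default port */
	};
	inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

	for (int attempt = 0; attempt < 3; attempt++) {
		int fd = socket(AF_INET, SOCK_STREAM, 0);
		if (fd < 0) {
			return 1;
		}
		if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
			/* prints: connect() failed, errno = 111 (Connection refused) */
			printf("connect() failed, errno = %d (%s)\n",
			       errno, strerror(errno));
		}
		close(fd);
		sleep(1);	/* back off, then retry like the driver does */
	}
	return 0;
}
```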
00:30:23.105 Read completed with error (sct=0, sc=8)
00:30:23.105 starting I/O failed
00:30:23.105 Read completed with error (sct=0, sc=8)
00:30:23.105 starting I/O failed
00:30:23.105 Read completed with error (sct=0, sc=8)
00:30:23.105 starting I/O failed
00:30:23.105 Read completed with error (sct=0, sc=8)
00:30:23.105 starting I/O failed
00:30:23.105 Write completed with error (sct=0, sc=8)
00:30:23.105 starting I/O failed
00:30:23.105 Write completed with error (sct=0, sc=8)
00:30:23.105 starting I/O failed
00:30:23.105 Read completed with error (sct=0, sc=8)
00:30:23.105 starting I/O failed
00:30:23.105 Read completed with error (sct=0, sc=8)
00:30:23.105 starting I/O failed
00:30:23.105 Read completed with error (sct=0, sc=8)
00:30:23.105 starting I/O failed
00:30:23.105 Write completed with error (sct=0, sc=8)
00:30:23.105 starting I/O failed
00:30:23.105 Write completed with error (sct=0, sc=8)
00:30:23.105 starting I/O failed
00:30:23.105 Write completed with error (sct=0, sc=8)
00:30:23.105 starting I/O failed
00:30:23.105 Write completed with error (sct=0, sc=8)
00:30:23.105 starting I/O failed
00:30:23.105 Read completed with error (sct=0, sc=8)
00:30:23.105 starting I/O failed
00:30:23.105 Write completed with error (sct=0, sc=8)
00:30:23.105 starting I/O failed
00:30:23.105 Read completed with error (sct=0, sc=8)
00:30:23.105 starting I/O failed
00:30:23.105 Read completed with error (sct=0, sc=8)
00:30:23.105 starting I/O failed
00:30:23.105 Read completed with error (sct=0, sc=8)
00:30:23.105 starting I/O failed
00:30:23.105 Read completed with error (sct=0, sc=8)
00:30:23.105 starting I/O failed
00:30:23.105 Write completed with error (sct=0, sc=8)
00:30:23.105 starting I/O failed
00:30:23.105 Write completed with error (sct=0, sc=8)
00:30:23.105 starting I/O failed
00:30:23.105 Write completed with error (sct=0, sc=8)
00:30:23.105 starting I/O failed
00:30:23.105 Read completed with error (sct=0, sc=8)
00:30:23.105 starting I/O failed
00:30:23.105 Write completed with error (sct=0, sc=8)
00:30:23.105 starting I/O failed
00:30:23.105 Read completed with error (sct=0, sc=8)
00:30:23.105 starting I/O failed
00:30:23.105 Read completed with error (sct=0, sc=8)
00:30:23.105 starting I/O failed
00:30:23.105 Write completed with error (sct=0, sc=8)
00:30:23.105 starting I/O failed
00:30:23.105 Write completed with error (sct=0, sc=8)
00:30:23.105 starting I/O failed
00:30:23.105 Read completed with error (sct=0, sc=8)
00:30:23.105 starting I/O failed
00:30:23.105 Write completed with error (sct=0, sc=8)
00:30:23.105 starting I/O failed
00:30:23.105 Write completed with error (sct=0, sc=8)
00:30:23.105 starting I/O failed
00:30:23.105 Read completed with error (sct=0, sc=8)
00:30:23.105 starting I/O failed
00:30:23.105 [2024-11-15 11:53:48.521085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:23.105 [2024-11-15 11:53:48.521377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.105 [2024-11-15 11:53:48.521413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420
00:30:23.105 qpair failed and we were unable to recover it.
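The burst of "completed with error (sct=0, sc=8)" entries above is the driver failing every command still queued on the dead qpair: status code type 0 is the generic command status set, and status code 0x08 in that set is "Command Aborted due to SQ Deletion", the status used when outstanding I/O is aborted along with its queue; the -6 in the CQ transport error line is -ENXIO. A sketch of a completion callback that decodes those fields through the public SPDK API (the callback name and error handling are illustrative):

```c
/* Illustrative completion callback; decodes the sct/sc pair printed above.
 * sct=0 with sc=0x08 is the generic "Command Aborted due to SQ Deletion"
 * status reported when all I/O on a failed qpair is aborted. */
#include <stdio.h>
#include "spdk/nvme.h"

static void
io_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		/* For the entries above this prints: sct=0, sc=8 */
		fprintf(stderr, "I/O failed: sct=%d, sc=%d\n",
			cpl->status.sct, cpl->status.sc);
		return;
	}
	/* success path */
}
```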
00:30:23.105 [2024-11-15 11:53:48.521766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-15 11:53:48.521787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-15 11:53:48.522104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-15 11:53:48.522119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-15 11:53:48.522461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-15 11:53:48.522478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-15 11:53:48.522897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-15 11:53:48.522916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-15 11:53:48.523250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-15 11:53:48.523265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-15 11:53:48.523624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-15 11:53:48.523660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-15 11:53:48.524000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-15 11:53:48.524016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-15 11:53:48.524231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-15 11:53:48.524246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-15 11:53:48.524490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-15 11:53:48.524508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-15 11:53:48.524747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-15 11:53:48.524764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 
00:30:23.105 [2024-11-15 11:53:48.525095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.105 [2024-11-15 11:53:48.525110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.105 qpair failed and we were unable to recover it. 00:30:23.105 [2024-11-15 11:53:48.525443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-15 11:53:48.525459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 00:30:23.106 [2024-11-15 11:53:48.525816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-15 11:53:48.525832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 00:30:23.106 [2024-11-15 11:53:48.526199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-15 11:53:48.526219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 00:30:23.106 [2024-11-15 11:53:48.526586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-15 11:53:48.526604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 00:30:23.106 [2024-11-15 11:53:48.526918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-15 11:53:48.526933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 00:30:23.106 [2024-11-15 11:53:48.527281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-15 11:53:48.527297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 00:30:23.106 [2024-11-15 11:53:48.527504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-15 11:53:48.527519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 00:30:23.106 [2024-11-15 11:53:48.527820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-15 11:53:48.527838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 00:30:23.106 [2024-11-15 11:53:48.528162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-15 11:53:48.528179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 
00:30:23.106 [2024-11-15 11:53:48.528527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-15 11:53:48.528542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 00:30:23.106 [2024-11-15 11:53:48.528884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-15 11:53:48.528902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 00:30:23.106 [2024-11-15 11:53:48.529242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-15 11:53:48.529258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 00:30:23.106 [2024-11-15 11:53:48.529494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-15 11:53:48.529509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 00:30:23.106 [2024-11-15 11:53:48.529818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-15 11:53:48.529834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 00:30:23.106 [2024-11-15 11:53:48.530176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-15 11:53:48.530192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 00:30:23.106 [2024-11-15 11:53:48.530412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-15 11:53:48.530427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 00:30:23.106 [2024-11-15 11:53:48.530800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-15 11:53:48.530817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 00:30:23.106 [2024-11-15 11:53:48.531145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-15 11:53:48.531161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 00:30:23.106 [2024-11-15 11:53:48.531502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-15 11:53:48.531519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 
00:30:23.106 [2024-11-15 11:53:48.531953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-15 11:53:48.531970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 00:30:23.106 [2024-11-15 11:53:48.532314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-15 11:53:48.532329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 00:30:23.106 [2024-11-15 11:53:48.532698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-15 11:53:48.532715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 00:30:23.106 [2024-11-15 11:53:48.533013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-15 11:53:48.533029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 00:30:23.106 [2024-11-15 11:53:48.533353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-15 11:53:48.533369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 00:30:23.106 [2024-11-15 11:53:48.533617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-15 11:53:48.533634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 00:30:23.106 [2024-11-15 11:53:48.533960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-15 11:53:48.533977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 00:30:23.106 [2024-11-15 11:53:48.534306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-15 11:53:48.534324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 00:30:23.106 [2024-11-15 11:53:48.534678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-15 11:53:48.534695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 00:30:23.106 [2024-11-15 11:53:48.535042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-15 11:53:48.535056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 
00:30:23.106 [2024-11-15 11:53:48.535408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-15 11:53:48.535426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 00:30:23.106 [2024-11-15 11:53:48.535657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-15 11:53:48.535675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 00:30:23.106 [2024-11-15 11:53:48.536008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-15 11:53:48.536025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 00:30:23.106 [2024-11-15 11:53:48.536356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-15 11:53:48.536371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 00:30:23.106 [2024-11-15 11:53:48.536706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-15 11:53:48.536724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 00:30:23.106 [2024-11-15 11:53:48.537066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-15 11:53:48.537081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 00:30:23.106 [2024-11-15 11:53:48.537457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-15 11:53:48.537472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 00:30:23.106 [2024-11-15 11:53:48.537813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.106 [2024-11-15 11:53:48.537828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.106 qpair failed and we were unable to recover it. 00:30:23.107 [2024-11-15 11:53:48.538182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.107 [2024-11-15 11:53:48.538199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.107 qpair failed and we were unable to recover it. 00:30:23.107 [2024-11-15 11:53:48.538405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.107 [2024-11-15 11:53:48.538421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.107 qpair failed and we were unable to recover it. 
00:30:23.107 [2024-11-15 11:53:48.538773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.107 [2024-11-15 11:53:48.538791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.107 qpair failed and we were unable to recover it. 00:30:23.107 [2024-11-15 11:53:48.539115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.107 [2024-11-15 11:53:48.539130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.107 qpair failed and we were unable to recover it. 00:30:23.107 [2024-11-15 11:53:48.539470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.107 [2024-11-15 11:53:48.539486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.107 qpair failed and we were unable to recover it. 00:30:23.107 [2024-11-15 11:53:48.539857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.107 [2024-11-15 11:53:48.539877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.107 qpair failed and we were unable to recover it. 00:30:23.107 [2024-11-15 11:53:48.540217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.107 [2024-11-15 11:53:48.540233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.107 qpair failed and we were unable to recover it. 00:30:23.107 [2024-11-15 11:53:48.540588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.107 [2024-11-15 11:53:48.540606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.107 qpair failed and we were unable to recover it. 00:30:23.107 [2024-11-15 11:53:48.540922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.107 [2024-11-15 11:53:48.540940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.107 qpair failed and we were unable to recover it. 00:30:23.107 [2024-11-15 11:53:48.541317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.107 [2024-11-15 11:53:48.541333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.107 qpair failed and we were unable to recover it. 00:30:23.107 [2024-11-15 11:53:48.541655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.107 [2024-11-15 11:53:48.541673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.107 qpair failed and we were unable to recover it. 00:30:23.107 [2024-11-15 11:53:48.541995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.107 [2024-11-15 11:53:48.542012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.107 qpair failed and we were unable to recover it. 
00:30:23.107 [2024-11-15 11:53:48.542346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.107 [2024-11-15 11:53:48.542362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.107 qpair failed and we were unable to recover it. 00:30:23.107 [2024-11-15 11:53:48.542754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.107 [2024-11-15 11:53:48.542771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.107 qpair failed and we were unable to recover it. 00:30:23.107 [2024-11-15 11:53:48.542999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.107 [2024-11-15 11:53:48.543015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.107 qpair failed and we were unable to recover it. 00:30:23.107 [2024-11-15 11:53:48.543378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.107 [2024-11-15 11:53:48.543394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.107 qpair failed and we were unable to recover it. 00:30:23.107 [2024-11-15 11:53:48.543727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.107 [2024-11-15 11:53:48.543742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.107 qpair failed and we were unable to recover it. 00:30:23.107 [2024-11-15 11:53:48.544086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.107 [2024-11-15 11:53:48.544100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.107 qpair failed and we were unable to recover it. 00:30:23.107 [2024-11-15 11:53:48.544436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.107 [2024-11-15 11:53:48.544454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.107 qpair failed and we were unable to recover it. 00:30:23.107 [2024-11-15 11:53:48.544793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.107 [2024-11-15 11:53:48.544810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.107 qpair failed and we were unable to recover it. 00:30:23.107 [2024-11-15 11:53:48.545154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.107 [2024-11-15 11:53:48.545175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.107 qpair failed and we were unable to recover it. 00:30:23.107 [2024-11-15 11:53:48.545527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.107 [2024-11-15 11:53:48.545541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.107 qpair failed and we were unable to recover it. 
00:30:23.107 [2024-11-15 11:53:48.545960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.107 [2024-11-15 11:53:48.545977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.107 qpair failed and we were unable to recover it. 00:30:23.107 [2024-11-15 11:53:48.546174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.107 [2024-11-15 11:53:48.546192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.107 qpair failed and we were unable to recover it. 00:30:23.107 [2024-11-15 11:53:48.546550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.107 [2024-11-15 11:53:48.546578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.107 qpair failed and we were unable to recover it. 00:30:23.107 [2024-11-15 11:53:48.546932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.107 [2024-11-15 11:53:48.546948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.107 qpair failed and we were unable to recover it. 00:30:23.107 [2024-11-15 11:53:48.547298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.107 [2024-11-15 11:53:48.547337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.107 qpair failed and we were unable to recover it. 00:30:23.107 [2024-11-15 11:53:48.547727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.107 [2024-11-15 11:53:48.547752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.107 qpair failed and we were unable to recover it. 00:30:23.107 [2024-11-15 11:53:48.548189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.107 [2024-11-15 11:53:48.548205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.107 qpair failed and we were unable to recover it. 00:30:23.107 [2024-11-15 11:53:48.548554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.107 [2024-11-15 11:53:48.548592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.107 qpair failed and we were unable to recover it. 00:30:23.107 [2024-11-15 11:53:48.548906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.107 [2024-11-15 11:53:48.548922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.107 qpair failed and we were unable to recover it. 00:30:23.107 [2024-11-15 11:53:48.549270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.107 [2024-11-15 11:53:48.549301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.107 qpair failed and we were unable to recover it. 
00:30:23.107 [2024-11-15 11:53:48.549636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.107 [2024-11-15 11:53:48.549659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.107 qpair failed and we were unable to recover it. 00:30:23.107 [2024-11-15 11:53:48.549972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.107 [2024-11-15 11:53:48.549990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.107 qpair failed and we were unable to recover it. 00:30:23.107 [2024-11-15 11:53:48.550334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.107 [2024-11-15 11:53:48.550348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.107 qpair failed and we were unable to recover it. 00:30:23.107 [2024-11-15 11:53:48.550697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.107 [2024-11-15 11:53:48.550714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.107 qpair failed and we were unable to recover it. 00:30:23.107 [2024-11-15 11:53:48.551073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.107 [2024-11-15 11:53:48.551097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.107 qpair failed and we were unable to recover it. 00:30:23.107 [2024-11-15 11:53:48.551470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.107 [2024-11-15 11:53:48.551500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.107 qpair failed and we were unable to recover it. 00:30:23.108 [2024-11-15 11:53:48.551878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.108 [2024-11-15 11:53:48.551896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.108 qpair failed and we were unable to recover it. 00:30:23.108 [2024-11-15 11:53:48.552282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.108 [2024-11-15 11:53:48.552300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.108 qpair failed and we were unable to recover it. 00:30:23.108 [2024-11-15 11:53:48.552536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.108 [2024-11-15 11:53:48.552553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.108 qpair failed and we were unable to recover it. 00:30:23.108 [2024-11-15 11:53:48.552952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.108 [2024-11-15 11:53:48.552976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.108 qpair failed and we were unable to recover it. 
00:30:23.108 [2024-11-15 11:53:48.553309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.108 [2024-11-15 11:53:48.553339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.108 qpair failed and we were unable to recover it. 00:30:23.108 [2024-11-15 11:53:48.553696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.108 [2024-11-15 11:53:48.553733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.108 qpair failed and we were unable to recover it. 00:30:23.108 [2024-11-15 11:53:48.554085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.108 [2024-11-15 11:53:48.554106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.108 qpair failed and we were unable to recover it. 00:30:23.108 [2024-11-15 11:53:48.554332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.108 [2024-11-15 11:53:48.554354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.108 qpair failed and we were unable to recover it. 00:30:23.108 [2024-11-15 11:53:48.554680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.108 [2024-11-15 11:53:48.554697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.108 qpair failed and we were unable to recover it. 00:30:23.108 [2024-11-15 11:53:48.554898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.108 [2024-11-15 11:53:48.554915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.108 qpair failed and we were unable to recover it. 00:30:23.108 [2024-11-15 11:53:48.555274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.108 [2024-11-15 11:53:48.555289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.108 qpair failed and we were unable to recover it. 00:30:23.108 [2024-11-15 11:53:48.555477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.108 [2024-11-15 11:53:48.555494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.108 qpair failed and we were unable to recover it. 00:30:23.108 [2024-11-15 11:53:48.555832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.108 [2024-11-15 11:53:48.555850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.108 qpair failed and we were unable to recover it. 00:30:23.108 [2024-11-15 11:53:48.555969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.108 [2024-11-15 11:53:48.555985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:23.108 qpair failed and we were unable to recover it. 
00:30:23.108 Read completed with error (sct=0, sc=8)
00:30:23.108 starting I/O failed
00:30:23.108 Read completed with error (sct=0, sc=8)
00:30:23.108 starting I/O failed
00:30:23.108 Read completed with error (sct=0, sc=8)
00:30:23.108 starting I/O failed
00:30:23.108 Read completed with error (sct=0, sc=8)
00:30:23.108 starting I/O failed
00:30:23.108 Read completed with error (sct=0, sc=8)
00:30:23.108 starting I/O failed
00:30:23.108 Read completed with error (sct=0, sc=8)
00:30:23.108 starting I/O failed
00:30:23.108 Read completed with error (sct=0, sc=8)
00:30:23.108 starting I/O failed
00:30:23.108 Read completed with error (sct=0, sc=8)
00:30:23.108 starting I/O failed
00:30:23.108 Read completed with error (sct=0, sc=8)
00:30:23.108 starting I/O failed
00:30:23.108 Read completed with error (sct=0, sc=8)
00:30:23.108 starting I/O failed
00:30:23.108 Read completed with error (sct=0, sc=8)
00:30:23.108 starting I/O failed
00:30:23.108 Read completed with error (sct=0, sc=8)
00:30:23.108 starting I/O failed
00:30:23.108 Read completed with error (sct=0, sc=8)
00:30:23.108 starting I/O failed
00:30:23.108 Write completed with error (sct=0, sc=8)
00:30:23.108 starting I/O failed
00:30:23.108 Write completed with error (sct=0, sc=8)
00:30:23.108 starting I/O failed
00:30:23.108 Write completed with error (sct=0, sc=8)
00:30:23.108 starting I/O failed
00:30:23.108 Read completed with error (sct=0, sc=8)
00:30:23.108 starting I/O failed
00:30:23.108 Write completed with error (sct=0, sc=8)
00:30:23.108 starting I/O failed
00:30:23.108 Read completed with error (sct=0, sc=8)
00:30:23.108 starting I/O failed
00:30:23.108 Read completed with error (sct=0, sc=8)
00:30:23.108 starting I/O failed
00:30:23.108 Write completed with error (sct=0, sc=8)
00:30:23.108 starting I/O failed
00:30:23.108 Read completed with error (sct=0, sc=8)
00:30:23.108 starting I/O failed
00:30:23.108 Read completed with error (sct=0, sc=8)
00:30:23.108 starting I/O failed
00:30:23.108 Read completed with error (sct=0, sc=8)
00:30:23.108 starting I/O failed
00:30:23.108 Write completed with error (sct=0, sc=8)
00:30:23.108 starting I/O failed
00:30:23.108 Read completed with error (sct=0, sc=8)
00:30:23.108 starting I/O failed
00:30:23.108 Read completed with error (sct=0, sc=8)
00:30:23.108 starting I/O failed
00:30:23.108 Read completed with error (sct=0, sc=8)
00:30:23.108 starting I/O failed
00:30:23.108 Write completed with error (sct=0, sc=8)
00:30:23.108 starting I/O failed
00:30:23.108 Read completed with error (sct=0, sc=8)
00:30:23.108 starting I/O failed
00:30:23.108 Write completed with error (sct=0, sc=8)
00:30:23.108 starting I/O failed
00:30:23.108 Write completed with error (sct=0, sc=8)
00:30:23.108 starting I/O failed
00:30:23.108 [2024-11-15 11:53:48.556807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:23.108 [2024-11-15 11:53:48.557415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.108 [2024-11-15 11:53:48.557526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.108 qpair failed and we were unable to recover it.
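The same pattern then repeats on qpair id 1: the transport failure surfaces as -ENXIO (-6) from the completion path, after which the host falls back to reconnecting (the tqpair=0x80b0c0 entries that follow). A sketch of how a poller might react to that return value, assuming the documented contract that spdk_nvme_qpair_process_completions() returns a negated errno once the qpair has failed:

```c
/* Illustrative poll loop; stops polling a qpair once the transport
 * reports it dead, matching "CQ transport error -6" above. */
#include <errno.h>
#include <stdbool.h>
#include "spdk/nvme.h"

static bool
poll_qpair(struct spdk_nvme_qpair *qpair)
{
	/* max_completions = 0 means "no limit": reap everything available. */
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

	if (rc < 0) {
		/* -ENXIO (-6): the qpair failed at the transport layer;
		 * stop polling and let the reconnect logic take over. */
		return false;
	}
	return true;
}
```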
00:30:23.108 [2024-11-15 11:53:48.558031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.108 [2024-11-15 11:53:48.558071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.108 qpair failed and we were unable to recover it.
00:30:23.108 [2024-11-15 11:53:48.558434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.108 [2024-11-15 11:53:48.558467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.108 qpair failed and we were unable to recover it.
00:30:23.108 [2024-11-15 11:53:48.558862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.108 [2024-11-15 11:53:48.558966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.108 qpair failed and we were unable to recover it.
00:30:23.108 [2024-11-15 11:53:48.559428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.108 [2024-11-15 11:53:48.559466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.108 qpair failed and we were unable to recover it.
00:30:23.108 [2024-11-15 11:53:48.559742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.108 [2024-11-15 11:53:48.559774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.108 qpair failed and we were unable to recover it.
00:30:23.108 [2024-11-15 11:53:48.560190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.108 [2024-11-15 11:53:48.560221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.108 qpair failed and we were unable to recover it.
00:30:23.108 [2024-11-15 11:53:48.560580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.108 [2024-11-15 11:53:48.560612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.108 qpair failed and we were unable to recover it.
00:30:23.108 [2024-11-15 11:53:48.560952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.108 [2024-11-15 11:53:48.560982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.108 qpair failed and we were unable to recover it.
00:30:23.109 [2024-11-15 11:53:48.561358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.109 [2024-11-15 11:53:48.561387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.109 qpair failed and we were unable to recover it.
00:30:23.109 [2024-11-15 11:53:48.561644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.109 [2024-11-15 11:53:48.561676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.109 qpair failed and we were unable to recover it.
00:30:23.109 [2024-11-15 11:53:48.562039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.109 [2024-11-15 11:53:48.562066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.109 qpair failed and we were unable to recover it.
00:30:23.109 [2024-11-15 11:53:48.562444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.109 [2024-11-15 11:53:48.562474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.109 qpair failed and we were unable to recover it.
00:30:23.109 [2024-11-15 11:53:48.562836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.109 [2024-11-15 11:53:48.562880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.109 qpair failed and we were unable to recover it.
00:30:23.109 [2024-11-15 11:53:48.563199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.109 [2024-11-15 11:53:48.563227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.109 qpair failed and we were unable to recover it.
00:30:23.109 [2024-11-15 11:53:48.563608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.109 [2024-11-15 11:53:48.563640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.109 qpair failed and we were unable to recover it.
00:30:23.109 [2024-11-15 11:53:48.564011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.109 [2024-11-15 11:53:48.564039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.109 qpair failed and we were unable to recover it.
00:30:23.109 [2024-11-15 11:53:48.564429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.109 [2024-11-15 11:53:48.564457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.109 qpair failed and we were unable to recover it.
00:30:23.109 [2024-11-15 11:53:48.564708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.109 [2024-11-15 11:53:48.564741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.109 qpair failed and we were unable to recover it.
00:30:23.109 [2024-11-15 11:53:48.565118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.109 [2024-11-15 11:53:48.565147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.109 qpair failed and we were unable to recover it.
00:30:23.109 [2024-11-15 11:53:48.565494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.109 [2024-11-15 11:53:48.565525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.109 qpair failed and we were unable to recover it.
00:30:23.109 [2024-11-15 11:53:48.565867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.109 [2024-11-15 11:53:48.565897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.109 qpair failed and we were unable to recover it.
00:30:23.109 [2024-11-15 11:53:48.566286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.109 [2024-11-15 11:53:48.566315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.109 qpair failed and we were unable to recover it.
00:30:23.109 [2024-11-15 11:53:48.566695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.109 [2024-11-15 11:53:48.566726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.109 qpair failed and we were unable to recover it.
00:30:23.109 [2024-11-15 11:53:48.567100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.109 [2024-11-15 11:53:48.567129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.109 qpair failed and we were unable to recover it.
00:30:23.109 [2024-11-15 11:53:48.567473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.109 [2024-11-15 11:53:48.567502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.109 qpair failed and we were unable to recover it.
00:30:23.109 [2024-11-15 11:53:48.567772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.109 [2024-11-15 11:53:48.567806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.109 qpair failed and we were unable to recover it.
00:30:23.109 [2024-11-15 11:53:48.568174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.109 [2024-11-15 11:53:48.568206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.109 qpair failed and we were unable to recover it.
00:30:23.109 [2024-11-15 11:53:48.568544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.109 [2024-11-15 11:53:48.568582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.109 qpair failed and we were unable to recover it.
00:30:23.109 [2024-11-15 11:53:48.568932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.109 [2024-11-15 11:53:48.568964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.109 qpair failed and we were unable to recover it.
00:30:23.109 [2024-11-15 11:53:48.569327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.109 [2024-11-15 11:53:48.569356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.109 qpair failed and we were unable to recover it.
00:30:23.109 [2024-11-15 11:53:48.569628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.109 [2024-11-15 11:53:48.569659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.109 qpair failed and we were unable to recover it.
00:30:23.109 [2024-11-15 11:53:48.569971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.109 [2024-11-15 11:53:48.569999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.109 qpair failed and we were unable to recover it.
00:30:23.109 [2024-11-15 11:53:48.570361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.109 [2024-11-15 11:53:48.570390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.109 qpair failed and we were unable to recover it.
00:30:23.109 [2024-11-15 11:53:48.570707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.109 [2024-11-15 11:53:48.570739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.109 qpair failed and we were unable to recover it.
00:30:23.109 [2024-11-15 11:53:48.571155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.109 [2024-11-15 11:53:48.571184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.109 qpair failed and we were unable to recover it.
00:30:23.109 [2024-11-15 11:53:48.571554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.109 [2024-11-15 11:53:48.571592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.109 qpair failed and we were unable to recover it.
00:30:23.109 [2024-11-15 11:53:48.571965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.109 [2024-11-15 11:53:48.571993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.109 qpair failed and we were unable to recover it.
00:30:23.109 [2024-11-15 11:53:48.572261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.109 [2024-11-15 11:53:48.572290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.109 qpair failed and we were unable to recover it.
00:30:23.109 [2024-11-15 11:53:48.572647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.109 [2024-11-15 11:53:48.572676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.109 qpair failed and we were unable to recover it.
00:30:23.109 [2024-11-15 11:53:48.573044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.109 [2024-11-15 11:53:48.573078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.109 qpair failed and we were unable to recover it.
00:30:23.109 [2024-11-15 11:53:48.573462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.109 [2024-11-15 11:53:48.573490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.109 qpair failed and we were unable to recover it.
00:30:23.109 [2024-11-15 11:53:48.573895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.109 [2024-11-15 11:53:48.573927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.109 qpair failed and we were unable to recover it.
00:30:23.109 [2024-11-15 11:53:48.574257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.109 [2024-11-15 11:53:48.574285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.109 qpair failed and we were unable to recover it.
00:30:23.109 [2024-11-15 11:53:48.574641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.109 [2024-11-15 11:53:48.574671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.109 qpair failed and we were unable to recover it.
00:30:23.109 [2024-11-15 11:53:48.575078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.109 [2024-11-15 11:53:48.575108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.109 qpair failed and we were unable to recover it.
00:30:23.109 [2024-11-15 11:53:48.575443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.109 [2024-11-15 11:53:48.575472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.109 qpair failed and we were unable to recover it.
00:30:23.109 [2024-11-15 11:53:48.575864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.109 [2024-11-15 11:53:48.575896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.110 qpair failed and we were unable to recover it.
00:30:23.110 [2024-11-15 11:53:48.576251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.110 [2024-11-15 11:53:48.576282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.110 qpair failed and we were unable to recover it.
00:30:23.110 [2024-11-15 11:53:48.576639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.110 [2024-11-15 11:53:48.576669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.110 qpair failed and we were unable to recover it.
00:30:23.110 [2024-11-15 11:53:48.577040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.110 [2024-11-15 11:53:48.577069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.110 qpair failed and we were unable to recover it.
00:30:23.110 [2024-11-15 11:53:48.577315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.110 [2024-11-15 11:53:48.577348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.110 qpair failed and we were unable to recover it.
00:30:23.110 [2024-11-15 11:53:48.577727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.110 [2024-11-15 11:53:48.577756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.110 qpair failed and we were unable to recover it.
00:30:23.380 [2024-11-15 11:53:48.578148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.380 [2024-11-15 11:53:48.578182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.380 qpair failed and we were unable to recover it.
00:30:23.380 [2024-11-15 11:53:48.578612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.380 [2024-11-15 11:53:48.578645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.380 qpair failed and we were unable to recover it.
00:30:23.380 [2024-11-15 11:53:48.579035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.380 [2024-11-15 11:53:48.579065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.380 qpair failed and we were unable to recover it.
00:30:23.380 [2024-11-15 11:53:48.579394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.380 [2024-11-15 11:53:48.579424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.380 qpair failed and we were unable to recover it.
00:30:23.380 [2024-11-15 11:53:48.579879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.380 [2024-11-15 11:53:48.579909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.380 qpair failed and we were unable to recover it.
00:30:23.380 [2024-11-15 11:53:48.580083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.380 [2024-11-15 11:53:48.580115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.380 qpair failed and we were unable to recover it.
00:30:23.380 [2024-11-15 11:53:48.580449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.380 [2024-11-15 11:53:48.580480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.380 qpair failed and we were unable to recover it.
00:30:23.380 [2024-11-15 11:53:48.580828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.380 [2024-11-15 11:53:48.580861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.380 qpair failed and we were unable to recover it.
00:30:23.380 [2024-11-15 11:53:48.581113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.380 [2024-11-15 11:53:48.581141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.380 qpair failed and we were unable to recover it.
00:30:23.380 [2024-11-15 11:53:48.581534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.380 [2024-11-15 11:53:48.581573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.381 qpair failed and we were unable to recover it.
00:30:23.381 [2024-11-15 11:53:48.581960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.381 [2024-11-15 11:53:48.581991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.381 qpair failed and we were unable to recover it.
00:30:23.381 [2024-11-15 11:53:48.582368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.381 [2024-11-15 11:53:48.582399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.381 qpair failed and we were unable to recover it.
00:30:23.381 [2024-11-15 11:53:48.582740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.381 [2024-11-15 11:53:48.582770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.381 qpair failed and we were unable to recover it.
00:30:23.381 [2024-11-15 11:53:48.583064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.381 [2024-11-15 11:53:48.583095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.381 qpair failed and we were unable to recover it.
00:30:23.381 [2024-11-15 11:53:48.583337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.381 [2024-11-15 11:53:48.583366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.381 qpair failed and we were unable to recover it.
00:30:23.381 [2024-11-15 11:53:48.583746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.381 [2024-11-15 11:53:48.583779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.381 qpair failed and we were unable to recover it.
00:30:23.381 [2024-11-15 11:53:48.584137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.381 [2024-11-15 11:53:48.584168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.381 qpair failed and we were unable to recover it.
00:30:23.381 [2024-11-15 11:53:48.584583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.381 [2024-11-15 11:53:48.584616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.381 qpair failed and we were unable to recover it.
00:30:23.381 [2024-11-15 11:53:48.585005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.381 [2024-11-15 11:53:48.585034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.381 qpair failed and we were unable to recover it.
00:30:23.381 [2024-11-15 11:53:48.585393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.381 [2024-11-15 11:53:48.585424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.381 qpair failed and we were unable to recover it.
00:30:23.381 [2024-11-15 11:53:48.585617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.381 [2024-11-15 11:53:48.585649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.381 qpair failed and we were unable to recover it.
00:30:23.381 [2024-11-15 11:53:48.586034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.381 [2024-11-15 11:53:48.586063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.381 qpair failed and we were unable to recover it.
00:30:23.381 [2024-11-15 11:53:48.586429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.381 [2024-11-15 11:53:48.586460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.381 qpair failed and we were unable to recover it.
00:30:23.381 [2024-11-15 11:53:48.586656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.381 [2024-11-15 11:53:48.586691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.381 qpair failed and we were unable to recover it.
00:30:23.381 [2024-11-15 11:53:48.586924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.381 [2024-11-15 11:53:48.586956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.381 qpair failed and we were unable to recover it.
00:30:23.381 [2024-11-15 11:53:48.587322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.381 [2024-11-15 11:53:48.587352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.381 qpair failed and we were unable to recover it.
00:30:23.381 [2024-11-15 11:53:48.587618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.381 [2024-11-15 11:53:48.587649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.381 qpair failed and we were unable to recover it.
00:30:23.381 [2024-11-15 11:53:48.588034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.381 [2024-11-15 11:53:48.588064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.381 qpair failed and we were unable to recover it.
00:30:23.381 [2024-11-15 11:53:48.588440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.381 [2024-11-15 11:53:48.588479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.381 qpair failed and we were unable to recover it.
00:30:23.381 [2024-11-15 11:53:48.588892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.381 [2024-11-15 11:53:48.588923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.381 qpair failed and we were unable to recover it.
00:30:23.381 [2024-11-15 11:53:48.589279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.381 [2024-11-15 11:53:48.589312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.381 qpair failed and we were unable to recover it.
00:30:23.381 [2024-11-15 11:53:48.589690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.381 [2024-11-15 11:53:48.589720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.381 qpair failed and we were unable to recover it.
00:30:23.381 [2024-11-15 11:53:48.590060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.381 [2024-11-15 11:53:48.590091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.381 qpair failed and we were unable to recover it.
00:30:23.381 [2024-11-15 11:53:48.590464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.381 [2024-11-15 11:53:48.590494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.381 qpair failed and we were unable to recover it.
00:30:23.381 [2024-11-15 11:53:48.590845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.381 [2024-11-15 11:53:48.590876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.381 qpair failed and we were unable to recover it.
00:30:23.381 [2024-11-15 11:53:48.591253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.382 [2024-11-15 11:53:48.591284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.382 qpair failed and we were unable to recover it.
00:30:23.382 [2024-11-15 11:53:48.591693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.382 [2024-11-15 11:53:48.591723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.382 qpair failed and we were unable to recover it.
00:30:23.382 [2024-11-15 11:53:48.591984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.382 [2024-11-15 11:53:48.592014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.382 qpair failed and we were unable to recover it.
00:30:23.382 [2024-11-15 11:53:48.592326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.382 [2024-11-15 11:53:48.592356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.382 qpair failed and we were unable to recover it.
00:30:23.382 [2024-11-15 11:53:48.592732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.382 [2024-11-15 11:53:48.592764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.382 qpair failed and we were unable to recover it.
00:30:23.382 [2024-11-15 11:53:48.593177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.382 [2024-11-15 11:53:48.593207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.382 qpair failed and we were unable to recover it.
00:30:23.382 [2024-11-15 11:53:48.593531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.382 [2024-11-15 11:53:48.593559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.382 qpair failed and we were unable to recover it.
00:30:23.382 [2024-11-15 11:53:48.593949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.382 [2024-11-15 11:53:48.593978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.382 qpair failed and we were unable to recover it.
00:30:23.382 [2024-11-15 11:53:48.594351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.382 [2024-11-15 11:53:48.594383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.382 qpair failed and we were unable to recover it.
00:30:23.382 [2024-11-15 11:53:48.594639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.382 [2024-11-15 11:53:48.594670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.382 qpair failed and we were unable to recover it.
00:30:23.382 [2024-11-15 11:53:48.594923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.382 [2024-11-15 11:53:48.594951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.382 qpair failed and we were unable to recover it.
00:30:23.382 [2024-11-15 11:53:48.595302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.382 [2024-11-15 11:53:48.595330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.382 qpair failed and we were unable to recover it.
00:30:23.382 [2024-11-15 11:53:48.595688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.382 [2024-11-15 11:53:48.595720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.382 qpair failed and we were unable to recover it.
00:30:23.382 [2024-11-15 11:53:48.596055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.382 [2024-11-15 11:53:48.596084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.382 qpair failed and we were unable to recover it.
00:30:23.382 [2024-11-15 11:53:48.596432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.382 [2024-11-15 11:53:48.596464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.382 qpair failed and we were unable to recover it.
00:30:23.382 [2024-11-15 11:53:48.596797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.382 [2024-11-15 11:53:48.596830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.382 qpair failed and we were unable to recover it.
00:30:23.382 [2024-11-15 11:53:48.597197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.382 [2024-11-15 11:53:48.597228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.382 qpair failed and we were unable to recover it.
00:30:23.382 [2024-11-15 11:53:48.597630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.382 [2024-11-15 11:53:48.597660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.382 qpair failed and we were unable to recover it.
00:30:23.382 [2024-11-15 11:53:48.597902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.382 [2024-11-15 11:53:48.597934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.382 qpair failed and we were unable to recover it.
00:30:23.382 [2024-11-15 11:53:48.598334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.382 [2024-11-15 11:53:48.598363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.382 qpair failed and we were unable to recover it.
00:30:23.382 [2024-11-15 11:53:48.598615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.382 [2024-11-15 11:53:48.598644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.382 qpair failed and we were unable to recover it.
00:30:23.382 [2024-11-15 11:53:48.599023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.382 [2024-11-15 11:53:48.599052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.382 qpair failed and we were unable to recover it.
00:30:23.382 [2024-11-15 11:53:48.599448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.382 [2024-11-15 11:53:48.599480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.382 qpair failed and we were unable to recover it.
00:30:23.382 [2024-11-15 11:53:48.599795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.382 [2024-11-15 11:53:48.599826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.382 qpair failed and we were unable to recover it.
00:30:23.382 [2024-11-15 11:53:48.600099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.382 [2024-11-15 11:53:48.600129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.382 qpair failed and we were unable to recover it.
00:30:23.383 [2024-11-15 11:53:48.600488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.383 [2024-11-15 11:53:48.600518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.383 qpair failed and we were unable to recover it.
00:30:23.383 [2024-11-15 11:53:48.600797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.383 [2024-11-15 11:53:48.600829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.383 qpair failed and we were unable to recover it.
00:30:23.383 [2024-11-15 11:53:48.601209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.383 [2024-11-15 11:53:48.601239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.383 qpair failed and we were unable to recover it.
00:30:23.383 [2024-11-15 11:53:48.601680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.383 [2024-11-15 11:53:48.601711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.383 qpair failed and we were unable to recover it.
00:30:23.383 [2024-11-15 11:53:48.601961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.383 [2024-11-15 11:53:48.601991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.383 qpair failed and we were unable to recover it.
00:30:23.383 [2024-11-15 11:53:48.602354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.383 [2024-11-15 11:53:48.602383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.383 qpair failed and we were unable to recover it.
00:30:23.383 [2024-11-15 11:53:48.602733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.383 [2024-11-15 11:53:48.602771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.383 qpair failed and we were unable to recover it.
00:30:23.383 [2024-11-15 11:53:48.603014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.383 [2024-11-15 11:53:48.603043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.383 qpair failed and we were unable to recover it.
00:30:23.383 [2024-11-15 11:53:48.603381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.383 [2024-11-15 11:53:48.603411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.383 qpair failed and we were unable to recover it.
00:30:23.383 [2024-11-15 11:53:48.603687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.383 [2024-11-15 11:53:48.603718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.383 qpair failed and we were unable to recover it.
00:30:23.383 [2024-11-15 11:53:48.604084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.383 [2024-11-15 11:53:48.604113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.383 qpair failed and we were unable to recover it.
00:30:23.383 [2024-11-15 11:53:48.604482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.383 [2024-11-15 11:53:48.604514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.383 qpair failed and we were unable to recover it.
00:30:23.383 [2024-11-15 11:53:48.604759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.383 [2024-11-15 11:53:48.604789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.383 qpair failed and we were unable to recover it.
00:30:23.383 [2024-11-15 11:53:48.605132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.383 [2024-11-15 11:53:48.605160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.383 qpair failed and we were unable to recover it.
00:30:23.383 [2024-11-15 11:53:48.605541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.383 [2024-11-15 11:53:48.605580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.383 qpair failed and we were unable to recover it.
00:30:23.383 [2024-11-15 11:53:48.605853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.383 [2024-11-15 11:53:48.605882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.383 qpair failed and we were unable to recover it.
00:30:23.383 [2024-11-15 11:53:48.606264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.383 [2024-11-15 11:53:48.606292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.383 qpair failed and we were unable to recover it.
00:30:23.383 [2024-11-15 11:53:48.606647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.383 [2024-11-15 11:53:48.606677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.383 qpair failed and we were unable to recover it.
00:30:23.383 [2024-11-15 11:53:48.607023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.383 [2024-11-15 11:53:48.607052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.383 qpair failed and we were unable to recover it.
00:30:23.383 [2024-11-15 11:53:48.607411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.383 [2024-11-15 11:53:48.607439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.383 qpair failed and we were unable to recover it.
00:30:23.383 [2024-11-15 11:53:48.607801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.383 [2024-11-15 11:53:48.607832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.383 qpair failed and we were unable to recover it.
00:30:23.383 [2024-11-15 11:53:48.608210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.383 [2024-11-15 11:53:48.608238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.383 qpair failed and we were unable to recover it.
00:30:23.383 [2024-11-15 11:53:48.608593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.383 [2024-11-15 11:53:48.608623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.383 qpair failed and we were unable to recover it.
00:30:23.383 [2024-11-15 11:53:48.609007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.383 [2024-11-15 11:53:48.609036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.383 qpair failed and we were unable to recover it.
00:30:23.383 [2024-11-15 11:53:48.609396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.383 [2024-11-15 11:53:48.609424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.383 qpair failed and we were unable to recover it.
00:30:23.383 [2024-11-15 11:53:48.609789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.384 [2024-11-15 11:53:48.609819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.384 qpair failed and we were unable to recover it.
00:30:23.384 [2024-11-15 11:53:48.610169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.384 [2024-11-15 11:53:48.610199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.384 qpair failed and we were unable to recover it.
00:30:23.384 [2024-11-15 11:53:48.610542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.384 [2024-11-15 11:53:48.610583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.384 qpair failed and we were unable to recover it.
00:30:23.384 [2024-11-15 11:53:48.610873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.384 [2024-11-15 11:53:48.610902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.384 qpair failed and we were unable to recover it.
00:30:23.384 [2024-11-15 11:53:48.611242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.384 [2024-11-15 11:53:48.611272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.384 qpair failed and we were unable to recover it.
00:30:23.384 [2024-11-15 11:53:48.611603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.384 [2024-11-15 11:53:48.611634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.384 qpair failed and we were unable to recover it.
00:30:23.384 [2024-11-15 11:53:48.611975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.384 [2024-11-15 11:53:48.612011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.384 qpair failed and we were unable to recover it.
00:30:23.384 [2024-11-15 11:53:48.612403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.384 [2024-11-15 11:53:48.612433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.384 qpair failed and we were unable to recover it.
00:30:23.384 [2024-11-15 11:53:48.612672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.384 [2024-11-15 11:53:48.612703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.384 qpair failed and we were unable to recover it.
00:30:23.384 [2024-11-15 11:53:48.613021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.384 [2024-11-15 11:53:48.613050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.384 qpair failed and we were unable to recover it.
00:30:23.384 [2024-11-15 11:53:48.613405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.384 [2024-11-15 11:53:48.613440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.384 qpair failed and we were unable to recover it.
00:30:23.384 [2024-11-15 11:53:48.613872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.384 [2024-11-15 11:53:48.613907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.384 qpair failed and we were unable to recover it.
00:30:23.384 [2024-11-15 11:53:48.614269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.384 [2024-11-15 11:53:48.614297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.384 qpair failed and we were unable to recover it.
00:30:23.384 [2024-11-15 11:53:48.614643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.384 [2024-11-15 11:53:48.614674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.384 qpair failed and we were unable to recover it.
00:30:23.384 [2024-11-15 11:53:48.615028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.384 [2024-11-15 11:53:48.615057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.384 qpair failed and we were unable to recover it.
00:30:23.384 [2024-11-15 11:53:48.615408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.384 [2024-11-15 11:53:48.615438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.384 qpair failed and we were unable to recover it.
00:30:23.384 [2024-11-15 11:53:48.615819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.384 [2024-11-15 11:53:48.615849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.384 qpair failed and we were unable to recover it.
00:30:23.384 [2024-11-15 11:53:48.616218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.384 [2024-11-15 11:53:48.616246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.384 qpair failed and we were unable to recover it.
00:30:23.384 [2024-11-15 11:53:48.616617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.384 [2024-11-15 11:53:48.616647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.384 qpair failed and we were unable to recover it.
00:30:23.384 [2024-11-15 11:53:48.617009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.384 [2024-11-15 11:53:48.617039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.384 qpair failed and we were unable to recover it.
00:30:23.384 [2024-11-15 11:53:48.617405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.384 [2024-11-15 11:53:48.617434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.384 qpair failed and we were unable to recover it.
00:30:23.384 [2024-11-15 11:53:48.617803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.384 [2024-11-15 11:53:48.617833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.384 qpair failed and we were unable to recover it.
00:30:23.384 [2024-11-15 11:53:48.618167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.384 [2024-11-15 11:53:48.618198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.384 qpair failed and we were unable to recover it.
00:30:23.384 [2024-11-15 11:53:48.618533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.384 [2024-11-15 11:53:48.618570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.384 qpair failed and we were unable to recover it.
00:30:23.384 [2024-11-15 11:53:48.618981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.384 [2024-11-15 11:53:48.619009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.384 qpair failed and we were unable to recover it.
00:30:23.384 [2024-11-15 11:53:48.619378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.384 [2024-11-15 11:53:48.619410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.384 qpair failed and we were unable to recover it.
00:30:23.384 [2024-11-15 11:53:48.619746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.385 [2024-11-15 11:53:48.619776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.385 qpair failed and we were unable to recover it.
00:30:23.385 [2024-11-15 11:53:48.620097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.385 [2024-11-15 11:53:48.620125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.385 qpair failed and we were unable to recover it.
00:30:23.385 [2024-11-15 11:53:48.620375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.385 [2024-11-15 11:53:48.620405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.385 qpair failed and we were unable to recover it.
00:30:23.385 [2024-11-15 11:53:48.620826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.385 [2024-11-15 11:53:48.620856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.385 qpair failed and we were unable to recover it.
00:30:23.385 [2024-11-15 11:53:48.621213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.385 [2024-11-15 11:53:48.621241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.385 qpair failed and we were unable to recover it.
00:30:23.385 [2024-11-15 11:53:48.621636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.385 [2024-11-15 11:53:48.621666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.385 qpair failed and we were unable to recover it.
00:30:23.385 [2024-11-15 11:53:48.622048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.385 [2024-11-15 11:53:48.622077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.385 qpair failed and we were unable to recover it.
00:30:23.385 [2024-11-15 11:53:48.622420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.385 [2024-11-15 11:53:48.622448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.385 qpair failed and we were unable to recover it. 00:30:23.385 [2024-11-15 11:53:48.622779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.385 [2024-11-15 11:53:48.622808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.385 qpair failed and we were unable to recover it. 00:30:23.385 [2024-11-15 11:53:48.623184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.385 [2024-11-15 11:53:48.623216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.385 qpair failed and we were unable to recover it. 00:30:23.385 [2024-11-15 11:53:48.623582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.385 [2024-11-15 11:53:48.623613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.385 qpair failed and we were unable to recover it. 00:30:23.385 [2024-11-15 11:53:48.623950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.385 [2024-11-15 11:53:48.623980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.385 qpair failed and we were unable to recover it. 00:30:23.385 [2024-11-15 11:53:48.624322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.385 [2024-11-15 11:53:48.624350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.385 qpair failed and we were unable to recover it. 00:30:23.385 [2024-11-15 11:53:48.624611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.385 [2024-11-15 11:53:48.624640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.385 qpair failed and we were unable to recover it. 00:30:23.385 [2024-11-15 11:53:48.625029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.385 [2024-11-15 11:53:48.625059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.385 qpair failed and we were unable to recover it. 00:30:23.385 [2024-11-15 11:53:48.625353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.385 [2024-11-15 11:53:48.625382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.385 qpair failed and we were unable to recover it. 00:30:23.385 [2024-11-15 11:53:48.625739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.385 [2024-11-15 11:53:48.625769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.385 qpair failed and we were unable to recover it. 
00:30:23.385 [2024-11-15 11:53:48.626134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.385 [2024-11-15 11:53:48.626164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.385 qpair failed and we were unable to recover it. 00:30:23.385 [2024-11-15 11:53:48.626601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.385 [2024-11-15 11:53:48.626632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.385 qpair failed and we were unable to recover it. 00:30:23.385 [2024-11-15 11:53:48.627019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.385 [2024-11-15 11:53:48.627047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.385 qpair failed and we were unable to recover it. 00:30:23.385 [2024-11-15 11:53:48.627408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.385 [2024-11-15 11:53:48.627437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.385 qpair failed and we were unable to recover it. 00:30:23.385 [2024-11-15 11:53:48.627788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.385 [2024-11-15 11:53:48.627817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.385 qpair failed and we were unable to recover it. 00:30:23.385 [2024-11-15 11:53:48.628162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.385 [2024-11-15 11:53:48.628193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.385 qpair failed and we were unable to recover it. 00:30:23.385 [2024-11-15 11:53:48.628486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.385 [2024-11-15 11:53:48.628515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.385 qpair failed and we were unable to recover it. 00:30:23.385 [2024-11-15 11:53:48.628872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.385 [2024-11-15 11:53:48.628902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.385 qpair failed and we were unable to recover it. 00:30:23.385 [2024-11-15 11:53:48.629301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.385 [2024-11-15 11:53:48.629330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.385 qpair failed and we were unable to recover it. 00:30:23.385 [2024-11-15 11:53:48.629697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.385 [2024-11-15 11:53:48.629733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.385 qpair failed and we were unable to recover it. 
00:30:23.385 [2024-11-15 11:53:48.630111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.385 [2024-11-15 11:53:48.630139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.385 qpair failed and we were unable to recover it. 00:30:23.385 [2024-11-15 11:53:48.630514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.386 [2024-11-15 11:53:48.630543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.386 qpair failed and we were unable to recover it. 00:30:23.386 [2024-11-15 11:53:48.630916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.386 [2024-11-15 11:53:48.630946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.386 qpair failed and we were unable to recover it. 00:30:23.386 [2024-11-15 11:53:48.631308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.386 [2024-11-15 11:53:48.631336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.386 qpair failed and we were unable to recover it. 00:30:23.386 [2024-11-15 11:53:48.631727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.386 [2024-11-15 11:53:48.631757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.386 qpair failed and we were unable to recover it. 00:30:23.386 [2024-11-15 11:53:48.632091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.386 [2024-11-15 11:53:48.632120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.386 qpair failed and we were unable to recover it. 00:30:23.386 [2024-11-15 11:53:48.632458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.386 [2024-11-15 11:53:48.632489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.386 qpair failed and we were unable to recover it. 00:30:23.386 [2024-11-15 11:53:48.632829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.386 [2024-11-15 11:53:48.632859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.386 qpair failed and we were unable to recover it. 00:30:23.386 [2024-11-15 11:53:48.633234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.386 [2024-11-15 11:53:48.633262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.386 qpair failed and we were unable to recover it. 00:30:23.386 [2024-11-15 11:53:48.633631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.386 [2024-11-15 11:53:48.633661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.386 qpair failed and we were unable to recover it. 
00:30:23.386 [2024-11-15 11:53:48.634010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.386 [2024-11-15 11:53:48.634039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.386 qpair failed and we were unable to recover it. 00:30:23.386 [2024-11-15 11:53:48.634390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.386 [2024-11-15 11:53:48.634420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.386 qpair failed and we were unable to recover it. 00:30:23.386 [2024-11-15 11:53:48.634750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.386 [2024-11-15 11:53:48.634780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.386 qpair failed and we were unable to recover it. 00:30:23.386 [2024-11-15 11:53:48.635161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.386 [2024-11-15 11:53:48.635191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.386 qpair failed and we were unable to recover it. 00:30:23.386 [2024-11-15 11:53:48.635537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.386 [2024-11-15 11:53:48.635589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.386 qpair failed and we were unable to recover it. 00:30:23.386 [2024-11-15 11:53:48.635941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.386 [2024-11-15 11:53:48.635971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.386 qpair failed and we were unable to recover it. 00:30:23.386 [2024-11-15 11:53:48.636208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.386 [2024-11-15 11:53:48.636236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.386 qpair failed and we were unable to recover it. 00:30:23.386 [2024-11-15 11:53:48.636611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.386 [2024-11-15 11:53:48.636641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.386 qpair failed and we were unable to recover it. 00:30:23.386 [2024-11-15 11:53:48.637016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.386 [2024-11-15 11:53:48.637047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.386 qpair failed and we were unable to recover it. 00:30:23.386 [2024-11-15 11:53:48.637403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.386 [2024-11-15 11:53:48.637434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.386 qpair failed and we were unable to recover it. 
00:30:23.386 [2024-11-15 11:53:48.637791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.386 [2024-11-15 11:53:48.637821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.386 qpair failed and we were unable to recover it. 00:30:23.386 [2024-11-15 11:53:48.638181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.386 [2024-11-15 11:53:48.638210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.386 qpair failed and we were unable to recover it. 00:30:23.386 [2024-11-15 11:53:48.638590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.386 [2024-11-15 11:53:48.638619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.386 qpair failed and we were unable to recover it. 00:30:23.386 [2024-11-15 11:53:48.638910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.386 [2024-11-15 11:53:48.638939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.386 qpair failed and we were unable to recover it. 00:30:23.386 [2024-11-15 11:53:48.639288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.386 [2024-11-15 11:53:48.639317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.386 qpair failed and we were unable to recover it. 00:30:23.386 [2024-11-15 11:53:48.639679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.386 [2024-11-15 11:53:48.639710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.386 qpair failed and we were unable to recover it. 00:30:23.386 [2024-11-15 11:53:48.640079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.386 [2024-11-15 11:53:48.640113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.386 qpair failed and we were unable to recover it. 00:30:23.386 [2024-11-15 11:53:48.640456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.386 [2024-11-15 11:53:48.640486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.386 qpair failed and we were unable to recover it. 00:30:23.386 [2024-11-15 11:53:48.640762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.386 [2024-11-15 11:53:48.640792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.386 qpair failed and we were unable to recover it. 00:30:23.386 [2024-11-15 11:53:48.641150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.386 [2024-11-15 11:53:48.641178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.386 qpair failed and we were unable to recover it. 
00:30:23.386 [2024-11-15 11:53:48.641517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.386 [2024-11-15 11:53:48.641548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.386 qpair failed and we were unable to recover it. 00:30:23.386 [2024-11-15 11:53:48.641906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.386 [2024-11-15 11:53:48.641936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.386 qpair failed and we were unable to recover it. 00:30:23.386 [2024-11-15 11:53:48.642267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.386 [2024-11-15 11:53:48.642297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.386 qpair failed and we were unable to recover it. 00:30:23.386 [2024-11-15 11:53:48.642545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.386 [2024-11-15 11:53:48.642582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.387 qpair failed and we were unable to recover it. 00:30:23.387 [2024-11-15 11:53:48.642982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.387 [2024-11-15 11:53:48.643011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.387 qpair failed and we were unable to recover it. 00:30:23.387 [2024-11-15 11:53:48.643376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.387 [2024-11-15 11:53:48.643404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.387 qpair failed and we were unable to recover it. 00:30:23.387 [2024-11-15 11:53:48.643748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.387 [2024-11-15 11:53:48.643778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.387 qpair failed and we were unable to recover it. 00:30:23.387 [2024-11-15 11:53:48.644211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.387 [2024-11-15 11:53:48.644240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.387 qpair failed and we were unable to recover it. 00:30:23.387 [2024-11-15 11:53:48.644585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.387 [2024-11-15 11:53:48.644615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.387 qpair failed and we were unable to recover it. 00:30:23.387 [2024-11-15 11:53:48.644972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.387 [2024-11-15 11:53:48.645003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.387 qpair failed and we were unable to recover it. 
00:30:23.387 [2024-11-15 11:53:48.645366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.387 [2024-11-15 11:53:48.645396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.387 qpair failed and we were unable to recover it. 00:30:23.387 [2024-11-15 11:53:48.645746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.387 [2024-11-15 11:53:48.645777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.387 qpair failed and we were unable to recover it. 00:30:23.387 [2024-11-15 11:53:48.646139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.387 [2024-11-15 11:53:48.646168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.387 qpair failed and we were unable to recover it. 00:30:23.387 [2024-11-15 11:53:48.646529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.387 [2024-11-15 11:53:48.646558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.387 qpair failed and we were unable to recover it. 00:30:23.387 [2024-11-15 11:53:48.646913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.387 [2024-11-15 11:53:48.646942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.387 qpair failed and we were unable to recover it. 00:30:23.387 [2024-11-15 11:53:48.647315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.387 [2024-11-15 11:53:48.647344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.387 qpair failed and we were unable to recover it. 00:30:23.387 [2024-11-15 11:53:48.647688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.387 [2024-11-15 11:53:48.647718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.387 qpair failed and we were unable to recover it. 00:30:23.387 [2024-11-15 11:53:48.648088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.387 [2024-11-15 11:53:48.648117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.387 qpair failed and we were unable to recover it. 00:30:23.387 [2024-11-15 11:53:48.648483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.387 [2024-11-15 11:53:48.648512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.387 qpair failed and we were unable to recover it. 00:30:23.387 [2024-11-15 11:53:48.648767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.387 [2024-11-15 11:53:48.648801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.387 qpair failed and we were unable to recover it. 
00:30:23.387 [2024-11-15 11:53:48.649196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.387 [2024-11-15 11:53:48.649224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.387 qpair failed and we were unable to recover it. 00:30:23.387 [2024-11-15 11:53:48.649599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.387 [2024-11-15 11:53:48.649630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.387 qpair failed and we were unable to recover it. 00:30:23.387 [2024-11-15 11:53:48.649999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.387 [2024-11-15 11:53:48.650027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.387 qpair failed and we were unable to recover it. 00:30:23.387 [2024-11-15 11:53:48.650405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.387 [2024-11-15 11:53:48.650435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.387 qpair failed and we were unable to recover it. 00:30:23.387 [2024-11-15 11:53:48.650792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.387 [2024-11-15 11:53:48.650822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.387 qpair failed and we were unable to recover it. 00:30:23.387 [2024-11-15 11:53:48.651258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.387 [2024-11-15 11:53:48.651287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.387 qpair failed and we were unable to recover it. 00:30:23.387 [2024-11-15 11:53:48.651613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.387 [2024-11-15 11:53:48.651644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.387 qpair failed and we were unable to recover it. 00:30:23.387 [2024-11-15 11:53:48.652008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.387 [2024-11-15 11:53:48.652037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.387 qpair failed and we were unable to recover it. 00:30:23.387 [2024-11-15 11:53:48.652284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.387 [2024-11-15 11:53:48.652313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.387 qpair failed and we were unable to recover it. 00:30:23.387 [2024-11-15 11:53:48.652680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.387 [2024-11-15 11:53:48.652709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.387 qpair failed and we were unable to recover it. 
00:30:23.387 [2024-11-15 11:53:48.652952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.387 [2024-11-15 11:53:48.652981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.387 qpair failed and we were unable to recover it. 00:30:23.387 [2024-11-15 11:53:48.653337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.387 [2024-11-15 11:53:48.653366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.387 qpair failed and we were unable to recover it. 00:30:23.387 [2024-11-15 11:53:48.653702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.387 [2024-11-15 11:53:48.653732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.387 qpair failed and we were unable to recover it. 00:30:23.387 [2024-11-15 11:53:48.653990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.387 [2024-11-15 11:53:48.654022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.387 qpair failed and we were unable to recover it. 00:30:23.387 [2024-11-15 11:53:48.654334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.387 [2024-11-15 11:53:48.654371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.387 qpair failed and we were unable to recover it. 00:30:23.387 [2024-11-15 11:53:48.654705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.387 [2024-11-15 11:53:48.654734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.387 qpair failed and we were unable to recover it. 00:30:23.387 [2024-11-15 11:53:48.655070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.387 [2024-11-15 11:53:48.655099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.387 qpair failed and we were unable to recover it. 00:30:23.387 [2024-11-15 11:53:48.655436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.387 [2024-11-15 11:53:48.655478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.387 qpair failed and we were unable to recover it. 00:30:23.387 [2024-11-15 11:53:48.655799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.387 [2024-11-15 11:53:48.655829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.387 qpair failed and we were unable to recover it. 00:30:23.387 [2024-11-15 11:53:48.656047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.387 [2024-11-15 11:53:48.656076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.388 qpair failed and we were unable to recover it. 
00:30:23.388 [2024-11-15 11:53:48.656419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.388 [2024-11-15 11:53:48.656448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.388 qpair failed and we were unable to recover it. 00:30:23.388 [2024-11-15 11:53:48.656682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.388 [2024-11-15 11:53:48.656711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.388 qpair failed and we were unable to recover it. 00:30:23.388 [2024-11-15 11:53:48.657084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.388 [2024-11-15 11:53:48.657112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.388 qpair failed and we were unable to recover it. 00:30:23.388 [2024-11-15 11:53:48.657362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.388 [2024-11-15 11:53:48.657391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.388 qpair failed and we were unable to recover it. 00:30:23.388 [2024-11-15 11:53:48.657659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.388 [2024-11-15 11:53:48.657689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.388 qpair failed and we were unable to recover it. 00:30:23.388 [2024-11-15 11:53:48.658048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.388 [2024-11-15 11:53:48.658076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.388 qpair failed and we were unable to recover it. 00:30:23.388 [2024-11-15 11:53:48.658428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.388 [2024-11-15 11:53:48.658456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.388 qpair failed and we were unable to recover it. 00:30:23.388 [2024-11-15 11:53:48.658799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.388 [2024-11-15 11:53:48.658830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.388 qpair failed and we were unable to recover it. 00:30:23.388 [2024-11-15 11:53:48.659240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.388 [2024-11-15 11:53:48.659268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.388 qpair failed and we were unable to recover it. 00:30:23.388 [2024-11-15 11:53:48.659645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.388 [2024-11-15 11:53:48.659676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.388 qpair failed and we were unable to recover it. 
00:30:23.388 [2024-11-15 11:53:48.660044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.388 [2024-11-15 11:53:48.660074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.388 qpair failed and we were unable to recover it. 00:30:23.388 [2024-11-15 11:53:48.660430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.388 [2024-11-15 11:53:48.660459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.388 qpair failed and we were unable to recover it. 00:30:23.388 [2024-11-15 11:53:48.660799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.388 [2024-11-15 11:53:48.660828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.388 qpair failed and we were unable to recover it. 00:30:23.388 [2024-11-15 11:53:48.661082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.388 [2024-11-15 11:53:48.661113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.388 qpair failed and we were unable to recover it. 00:30:23.388 [2024-11-15 11:53:48.661426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.388 [2024-11-15 11:53:48.661455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.388 qpair failed and we were unable to recover it. 00:30:23.388 [2024-11-15 11:53:48.661845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.388 [2024-11-15 11:53:48.661876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.388 qpair failed and we were unable to recover it. 00:30:23.388 [2024-11-15 11:53:48.662236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.388 [2024-11-15 11:53:48.662265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.388 qpair failed and we were unable to recover it. 00:30:23.388 [2024-11-15 11:53:48.662638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.388 [2024-11-15 11:53:48.662667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.388 qpair failed and we were unable to recover it. 00:30:23.388 [2024-11-15 11:53:48.663042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.388 [2024-11-15 11:53:48.663070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.388 qpair failed and we were unable to recover it. 00:30:23.388 [2024-11-15 11:53:48.663433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.388 [2024-11-15 11:53:48.663461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.388 qpair failed and we were unable to recover it. 
00:30:23.388 [2024-11-15 11:53:48.663717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.388 [2024-11-15 11:53:48.663749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.388 qpair failed and we were unable to recover it. 00:30:23.388 [2024-11-15 11:53:48.663988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.388 [2024-11-15 11:53:48.664017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.388 qpair failed and we were unable to recover it. 00:30:23.388 [2024-11-15 11:53:48.664386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.388 [2024-11-15 11:53:48.664415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.388 qpair failed and we were unable to recover it. 00:30:23.388 [2024-11-15 11:53:48.664777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.388 [2024-11-15 11:53:48.664809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.388 qpair failed and we were unable to recover it. 00:30:23.388 [2024-11-15 11:53:48.665145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.388 [2024-11-15 11:53:48.665181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.388 qpair failed and we were unable to recover it. 00:30:23.388 [2024-11-15 11:53:48.665519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.388 [2024-11-15 11:53:48.665548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.388 qpair failed and we were unable to recover it. 00:30:23.388 [2024-11-15 11:53:48.665853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.388 [2024-11-15 11:53:48.665882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.388 qpair failed and we were unable to recover it. 00:30:23.388 [2024-11-15 11:53:48.666248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.388 [2024-11-15 11:53:48.666277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.388 qpair failed and we were unable to recover it. 00:30:23.388 [2024-11-15 11:53:48.666616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.388 [2024-11-15 11:53:48.666646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.388 qpair failed and we were unable to recover it. 00:30:23.388 [2024-11-15 11:53:48.667007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.388 [2024-11-15 11:53:48.667036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.388 qpair failed and we were unable to recover it. 
00:30:23.388 [2024-11-15 11:53:48.667363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.388 [2024-11-15 11:53:48.667393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.388 qpair failed and we were unable to recover it. 00:30:23.388 [2024-11-15 11:53:48.667750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.388 [2024-11-15 11:53:48.667779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.388 qpair failed and we were unable to recover it. 00:30:23.388 [2024-11-15 11:53:48.668160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.388 [2024-11-15 11:53:48.668190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.388 qpair failed and we were unable to recover it. 00:30:23.388 [2024-11-15 11:53:48.668556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.388 [2024-11-15 11:53:48.668594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.388 qpair failed and we were unable to recover it. 00:30:23.388 [2024-11-15 11:53:48.668911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.388 [2024-11-15 11:53:48.668940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.388 qpair failed and we were unable to recover it. 00:30:23.388 [2024-11-15 11:53:48.669308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.388 [2024-11-15 11:53:48.669337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.388 qpair failed and we were unable to recover it. 00:30:23.388 [2024-11-15 11:53:48.669661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.388 [2024-11-15 11:53:48.669691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.388 qpair failed and we were unable to recover it. 00:30:23.388 [2024-11-15 11:53:48.670035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.388 [2024-11-15 11:53:48.670063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.388 qpair failed and we were unable to recover it. 00:30:23.389 [2024-11-15 11:53:48.670408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.389 [2024-11-15 11:53:48.670437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.389 qpair failed and we were unable to recover it. 00:30:23.389 [2024-11-15 11:53:48.670795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.389 [2024-11-15 11:53:48.670826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.389 qpair failed and we were unable to recover it. 
00:30:23.389 [2024-11-15 11:53:48.671198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.389 [2024-11-15 11:53:48.671227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.389 qpair failed and we were unable to recover it. 00:30:23.389 [2024-11-15 11:53:48.671540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.389 [2024-11-15 11:53:48.671592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.389 qpair failed and we were unable to recover it. 00:30:23.389 [2024-11-15 11:53:48.671917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.389 [2024-11-15 11:53:48.671946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.389 qpair failed and we were unable to recover it. 00:30:23.389 [2024-11-15 11:53:48.672191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.389 [2024-11-15 11:53:48.672218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.389 qpair failed and we were unable to recover it. 00:30:23.389 [2024-11-15 11:53:48.672538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.389 [2024-11-15 11:53:48.672574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.389 qpair failed and we were unable to recover it. 00:30:23.389 [2024-11-15 11:53:48.672935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.389 [2024-11-15 11:53:48.672965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.389 qpair failed and we were unable to recover it. 00:30:23.389 [2024-11-15 11:53:48.673200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.389 [2024-11-15 11:53:48.673230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.389 qpair failed and we were unable to recover it. 00:30:23.389 [2024-11-15 11:53:48.673590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.389 [2024-11-15 11:53:48.673620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.389 qpair failed and we were unable to recover it. 00:30:23.389 [2024-11-15 11:53:48.673963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.389 [2024-11-15 11:53:48.673991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.389 qpair failed and we were unable to recover it. 00:30:23.389 [2024-11-15 11:53:48.674242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.389 [2024-11-15 11:53:48.674270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.389 qpair failed and we were unable to recover it. 
00:30:23.389 [2024-11-15 11:53:48.674673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.389 [2024-11-15 11:53:48.674703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.389 qpair failed and we were unable to recover it. 00:30:23.389 [2024-11-15 11:53:48.675073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.389 [2024-11-15 11:53:48.675101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.389 qpair failed and we were unable to recover it. 00:30:23.389 [2024-11-15 11:53:48.675446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.389 [2024-11-15 11:53:48.675476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.389 qpair failed and we were unable to recover it. 00:30:23.389 [2024-11-15 11:53:48.675819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.389 [2024-11-15 11:53:48.675849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.389 qpair failed and we were unable to recover it. 00:30:23.389 [2024-11-15 11:53:48.676214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.389 [2024-11-15 11:53:48.676244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.389 qpair failed and we were unable to recover it. 00:30:23.389 [2024-11-15 11:53:48.676593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.389 [2024-11-15 11:53:48.676622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.389 qpair failed and we were unable to recover it. 00:30:23.389 [2024-11-15 11:53:48.676959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.389 [2024-11-15 11:53:48.676988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.389 qpair failed and we were unable to recover it. 00:30:23.389 [2024-11-15 11:53:48.677223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.389 [2024-11-15 11:53:48.677255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.389 qpair failed and we were unable to recover it. 00:30:23.389 [2024-11-15 11:53:48.677535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.389 [2024-11-15 11:53:48.677578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.389 qpair failed and we were unable to recover it. 00:30:23.389 [2024-11-15 11:53:48.677966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.389 [2024-11-15 11:53:48.677996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.389 qpair failed and we were unable to recover it. 
00:30:23.389 [2024-11-15 11:53:48.678329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.389 [2024-11-15 11:53:48.678359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.389 qpair failed and we were unable to recover it. 00:30:23.389 [2024-11-15 11:53:48.678719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.389 [2024-11-15 11:53:48.678750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.389 qpair failed and we were unable to recover it. 00:30:23.389 [2024-11-15 11:53:48.679088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.389 [2024-11-15 11:53:48.679115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.389 qpair failed and we were unable to recover it. 00:30:23.389 [2024-11-15 11:53:48.679454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.389 [2024-11-15 11:53:48.679483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.389 qpair failed and we were unable to recover it. 00:30:23.389 [2024-11-15 11:53:48.679816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.389 [2024-11-15 11:53:48.679847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.389 qpair failed and we were unable to recover it. 00:30:23.389 [2024-11-15 11:53:48.680210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.389 [2024-11-15 11:53:48.680245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.389 qpair failed and we were unable to recover it. 00:30:23.389 [2024-11-15 11:53:48.680585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.389 [2024-11-15 11:53:48.680616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.389 qpair failed and we were unable to recover it. 00:30:23.389 [2024-11-15 11:53:48.680955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.389 [2024-11-15 11:53:48.680984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.389 qpair failed and we were unable to recover it. 00:30:23.389 [2024-11-15 11:53:48.681311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.389 [2024-11-15 11:53:48.681339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.389 qpair failed and we were unable to recover it. 00:30:23.389 [2024-11-15 11:53:48.681693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.389 [2024-11-15 11:53:48.681723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.389 qpair failed and we were unable to recover it. 
00:30:23.395 [2024-11-15 11:53:48.760226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.395 [2024-11-15 11:53:48.760255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.395 qpair failed and we were unable to recover it. 00:30:23.395 [2024-11-15 11:53:48.760592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.395 [2024-11-15 11:53:48.760630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.395 qpair failed and we were unable to recover it. 00:30:23.395 [2024-11-15 11:53:48.760976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.395 [2024-11-15 11:53:48.761004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.395 qpair failed and we were unable to recover it. 00:30:23.395 [2024-11-15 11:53:48.761403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.395 [2024-11-15 11:53:48.761435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.395 qpair failed and we were unable to recover it. 00:30:23.395 [2024-11-15 11:53:48.761806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.395 [2024-11-15 11:53:48.761836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.395 qpair failed and we were unable to recover it. 00:30:23.395 [2024-11-15 11:53:48.762185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.395 [2024-11-15 11:53:48.762213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.395 qpair failed and we were unable to recover it. 00:30:23.395 [2024-11-15 11:53:48.762587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.395 [2024-11-15 11:53:48.762619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.395 qpair failed and we were unable to recover it. 00:30:23.395 [2024-11-15 11:53:48.762993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.395 [2024-11-15 11:53:48.763021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.395 qpair failed and we were unable to recover it. 00:30:23.395 [2024-11-15 11:53:48.763394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.395 [2024-11-15 11:53:48.763424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.395 qpair failed and we were unable to recover it. 00:30:23.395 [2024-11-15 11:53:48.763738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.395 [2024-11-15 11:53:48.763769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.395 qpair failed and we were unable to recover it. 
00:30:23.395 [2024-11-15 11:53:48.764122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.395 [2024-11-15 11:53:48.764153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.395 qpair failed and we were unable to recover it. 00:30:23.395 [2024-11-15 11:53:48.764532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.395 [2024-11-15 11:53:48.764560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.395 qpair failed and we were unable to recover it. 00:30:23.395 [2024-11-15 11:53:48.764950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.395 [2024-11-15 11:53:48.764978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.395 qpair failed and we were unable to recover it. 00:30:23.395 [2024-11-15 11:53:48.765340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.395 [2024-11-15 11:53:48.765370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.395 qpair failed and we were unable to recover it. 00:30:23.395 [2024-11-15 11:53:48.765743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.395 [2024-11-15 11:53:48.765775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.395 qpair failed and we were unable to recover it. 00:30:23.395 [2024-11-15 11:53:48.766198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.395 [2024-11-15 11:53:48.766226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.395 qpair failed and we were unable to recover it. 00:30:23.395 [2024-11-15 11:53:48.766597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.395 [2024-11-15 11:53:48.766628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.395 qpair failed and we were unable to recover it. 00:30:23.395 [2024-11-15 11:53:48.767001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.395 [2024-11-15 11:53:48.767029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.395 qpair failed and we were unable to recover it. 00:30:23.395 [2024-11-15 11:53:48.767389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.395 [2024-11-15 11:53:48.767420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.395 qpair failed and we were unable to recover it. 00:30:23.395 [2024-11-15 11:53:48.767810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.395 [2024-11-15 11:53:48.767840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.395 qpair failed and we were unable to recover it. 
00:30:23.395 [2024-11-15 11:53:48.768229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.395 [2024-11-15 11:53:48.768258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.395 qpair failed and we were unable to recover it. 00:30:23.395 [2024-11-15 11:53:48.768611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.395 [2024-11-15 11:53:48.768641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.395 qpair failed and we were unable to recover it. 00:30:23.395 [2024-11-15 11:53:48.769020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.395 [2024-11-15 11:53:48.769049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.395 qpair failed and we were unable to recover it. 00:30:23.395 [2024-11-15 11:53:48.769405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.395 [2024-11-15 11:53:48.769433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.395 qpair failed and we were unable to recover it. 00:30:23.395 [2024-11-15 11:53:48.769780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.395 [2024-11-15 11:53:48.769810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.395 qpair failed and we were unable to recover it. 00:30:23.395 [2024-11-15 11:53:48.770178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.395 [2024-11-15 11:53:48.770209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.395 qpair failed and we were unable to recover it. 00:30:23.395 [2024-11-15 11:53:48.770578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.395 [2024-11-15 11:53:48.770608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.395 qpair failed and we were unable to recover it. 00:30:23.395 [2024-11-15 11:53:48.770995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.395 [2024-11-15 11:53:48.771025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.395 qpair failed and we were unable to recover it. 00:30:23.395 [2024-11-15 11:53:48.771404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.395 [2024-11-15 11:53:48.771435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.395 qpair failed and we were unable to recover it. 00:30:23.395 [2024-11-15 11:53:48.771797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.395 [2024-11-15 11:53:48.771827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.395 qpair failed and we were unable to recover it. 
00:30:23.395 [2024-11-15 11:53:48.772165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.395 [2024-11-15 11:53:48.772193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.395 qpair failed and we were unable to recover it. 00:30:23.396 [2024-11-15 11:53:48.772603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.396 [2024-11-15 11:53:48.772636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.396 qpair failed and we were unable to recover it. 00:30:23.396 [2024-11-15 11:53:48.772931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.396 [2024-11-15 11:53:48.772960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.396 qpair failed and we were unable to recover it. 00:30:23.396 [2024-11-15 11:53:48.773315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.396 [2024-11-15 11:53:48.773344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.396 qpair failed and we were unable to recover it. 00:30:23.396 [2024-11-15 11:53:48.773715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.396 [2024-11-15 11:53:48.773747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.396 qpair failed and we were unable to recover it. 00:30:23.396 [2024-11-15 11:53:48.774144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.396 [2024-11-15 11:53:48.774172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.396 qpair failed and we were unable to recover it. 00:30:23.396 [2024-11-15 11:53:48.774554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.396 [2024-11-15 11:53:48.774595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.396 qpair failed and we were unable to recover it. 00:30:23.396 [2024-11-15 11:53:48.774862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.396 [2024-11-15 11:53:48.774893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.396 qpair failed and we were unable to recover it. 00:30:23.396 [2024-11-15 11:53:48.775266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.396 [2024-11-15 11:53:48.775295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.396 qpair failed and we were unable to recover it. 00:30:23.396 [2024-11-15 11:53:48.775653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.396 [2024-11-15 11:53:48.775686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.396 qpair failed and we were unable to recover it. 
00:30:23.396 [2024-11-15 11:53:48.776027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.396 [2024-11-15 11:53:48.776061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.396 qpair failed and we were unable to recover it. 00:30:23.396 [2024-11-15 11:53:48.776411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.396 [2024-11-15 11:53:48.776443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.396 qpair failed and we were unable to recover it. 00:30:23.396 [2024-11-15 11:53:48.776837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.396 [2024-11-15 11:53:48.776867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.396 qpair failed and we were unable to recover it. 00:30:23.396 [2024-11-15 11:53:48.777207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.396 [2024-11-15 11:53:48.777236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.396 qpair failed and we were unable to recover it. 00:30:23.396 [2024-11-15 11:53:48.777497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.396 [2024-11-15 11:53:48.777526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.396 qpair failed and we were unable to recover it. 00:30:23.396 [2024-11-15 11:53:48.777915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.396 [2024-11-15 11:53:48.777945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.396 qpair failed and we were unable to recover it. 00:30:23.396 [2024-11-15 11:53:48.778200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.396 [2024-11-15 11:53:48.778229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.396 qpair failed and we were unable to recover it. 00:30:23.396 [2024-11-15 11:53:48.778590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.396 [2024-11-15 11:53:48.778619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.396 qpair failed and we were unable to recover it. 00:30:23.396 [2024-11-15 11:53:48.778948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.396 [2024-11-15 11:53:48.778978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.396 qpair failed and we were unable to recover it. 00:30:23.396 [2024-11-15 11:53:48.779330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.396 [2024-11-15 11:53:48.779359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.396 qpair failed and we were unable to recover it. 
00:30:23.396 [2024-11-15 11:53:48.779600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.396 [2024-11-15 11:53:48.779629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.396 qpair failed and we were unable to recover it. 00:30:23.396 [2024-11-15 11:53:48.780037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.396 [2024-11-15 11:53:48.780066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.396 qpair failed and we were unable to recover it. 00:30:23.396 [2024-11-15 11:53:48.780360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.396 [2024-11-15 11:53:48.780387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.396 qpair failed and we were unable to recover it. 00:30:23.396 [2024-11-15 11:53:48.780795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.396 [2024-11-15 11:53:48.780826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.396 qpair failed and we were unable to recover it. 00:30:23.396 [2024-11-15 11:53:48.781212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.396 [2024-11-15 11:53:48.781244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.396 qpair failed and we were unable to recover it. 00:30:23.396 [2024-11-15 11:53:48.781504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.396 [2024-11-15 11:53:48.781538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.396 qpair failed and we were unable to recover it. 00:30:23.396 [2024-11-15 11:53:48.781818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.396 [2024-11-15 11:53:48.781849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.396 qpair failed and we were unable to recover it. 00:30:23.396 [2024-11-15 11:53:48.782217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.396 [2024-11-15 11:53:48.782245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.396 qpair failed and we were unable to recover it. 00:30:23.396 [2024-11-15 11:53:48.782625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.396 [2024-11-15 11:53:48.782656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.396 qpair failed and we were unable to recover it. 00:30:23.396 [2024-11-15 11:53:48.783000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.396 [2024-11-15 11:53:48.783030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.396 qpair failed and we were unable to recover it. 
00:30:23.396 [2024-11-15 11:53:48.783333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.396 [2024-11-15 11:53:48.783361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.396 qpair failed and we were unable to recover it. 00:30:23.396 [2024-11-15 11:53:48.783717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.396 [2024-11-15 11:53:48.783749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.396 qpair failed and we were unable to recover it. 00:30:23.396 [2024-11-15 11:53:48.784123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.396 [2024-11-15 11:53:48.784154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.396 qpair failed and we were unable to recover it. 00:30:23.396 [2024-11-15 11:53:48.784542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.396 [2024-11-15 11:53:48.784579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.396 qpair failed and we were unable to recover it. 00:30:23.396 [2024-11-15 11:53:48.784932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.396 [2024-11-15 11:53:48.784963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.396 qpair failed and we were unable to recover it. 00:30:23.396 [2024-11-15 11:53:48.785317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.396 [2024-11-15 11:53:48.785347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.397 qpair failed and we were unable to recover it. 00:30:23.397 [2024-11-15 11:53:48.785587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.397 [2024-11-15 11:53:48.785615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.397 qpair failed and we were unable to recover it. 00:30:23.397 [2024-11-15 11:53:48.785980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.397 [2024-11-15 11:53:48.786017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.397 qpair failed and we were unable to recover it. 00:30:23.397 [2024-11-15 11:53:48.786342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.397 [2024-11-15 11:53:48.786372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.397 qpair failed and we were unable to recover it. 00:30:23.397 [2024-11-15 11:53:48.786666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.397 [2024-11-15 11:53:48.786695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.397 qpair failed and we were unable to recover it. 
00:30:23.397 [2024-11-15 11:53:48.787046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.397 [2024-11-15 11:53:48.787076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.397 qpair failed and we were unable to recover it. 00:30:23.397 [2024-11-15 11:53:48.787479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.397 [2024-11-15 11:53:48.787509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.397 qpair failed and we were unable to recover it. 00:30:23.397 [2024-11-15 11:53:48.787885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.397 [2024-11-15 11:53:48.787915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.397 qpair failed and we were unable to recover it. 00:30:23.397 [2024-11-15 11:53:48.788286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.397 [2024-11-15 11:53:48.788316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.397 qpair failed and we were unable to recover it. 00:30:23.397 [2024-11-15 11:53:48.788605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.397 [2024-11-15 11:53:48.788637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.397 qpair failed and we were unable to recover it. 00:30:23.397 [2024-11-15 11:53:48.789014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.397 [2024-11-15 11:53:48.789043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.397 qpair failed and we were unable to recover it. 00:30:23.397 [2024-11-15 11:53:48.789267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.397 [2024-11-15 11:53:48.789300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.397 qpair failed and we were unable to recover it. 00:30:23.397 [2024-11-15 11:53:48.789726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.397 [2024-11-15 11:53:48.789757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.397 qpair failed and we were unable to recover it. 00:30:23.397 [2024-11-15 11:53:48.789923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.397 [2024-11-15 11:53:48.789953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.397 qpair failed and we were unable to recover it. 00:30:23.397 [2024-11-15 11:53:48.790327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.397 [2024-11-15 11:53:48.790354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.397 qpair failed and we were unable to recover it. 
00:30:23.397 [2024-11-15 11:53:48.790730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.397 [2024-11-15 11:53:48.790761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.397 qpair failed and we were unable to recover it. 00:30:23.397 [2024-11-15 11:53:48.791079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.397 [2024-11-15 11:53:48.791109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.397 qpair failed and we were unable to recover it. 00:30:23.397 [2024-11-15 11:53:48.791465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.397 [2024-11-15 11:53:48.791495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.397 qpair failed and we were unable to recover it. 00:30:23.397 [2024-11-15 11:53:48.791684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.397 [2024-11-15 11:53:48.791714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.397 qpair failed and we were unable to recover it. 00:30:23.397 [2024-11-15 11:53:48.791953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.397 [2024-11-15 11:53:48.791982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.397 qpair failed and we were unable to recover it. 00:30:23.397 [2024-11-15 11:53:48.792379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.397 [2024-11-15 11:53:48.792408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.397 qpair failed and we were unable to recover it. 00:30:23.397 [2024-11-15 11:53:48.792747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.397 [2024-11-15 11:53:48.792777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.397 qpair failed and we were unable to recover it. 00:30:23.397 [2024-11-15 11:53:48.793126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.397 [2024-11-15 11:53:48.793154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.397 qpair failed and we were unable to recover it. 00:30:23.397 [2024-11-15 11:53:48.793534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.397 [2024-11-15 11:53:48.793572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.397 qpair failed and we were unable to recover it. 00:30:23.397 [2024-11-15 11:53:48.793916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.397 [2024-11-15 11:53:48.793944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.397 qpair failed and we were unable to recover it. 
00:30:23.397 [2024-11-15 11:53:48.794367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.397 [2024-11-15 11:53:48.794398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.397 qpair failed and we were unable to recover it. 00:30:23.397 [2024-11-15 11:53:48.794753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.397 [2024-11-15 11:53:48.794782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.397 qpair failed and we were unable to recover it. 00:30:23.397 [2024-11-15 11:53:48.795148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.397 [2024-11-15 11:53:48.795178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.397 qpair failed and we were unable to recover it. 00:30:23.397 [2024-11-15 11:53:48.795521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.397 [2024-11-15 11:53:48.795553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.397 qpair failed and we were unable to recover it. 00:30:23.397 [2024-11-15 11:53:48.795957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.397 [2024-11-15 11:53:48.795986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.397 qpair failed and we were unable to recover it. 00:30:23.397 [2024-11-15 11:53:48.796370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.397 [2024-11-15 11:53:48.796400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.397 qpair failed and we were unable to recover it. 00:30:23.397 [2024-11-15 11:53:48.796752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.397 [2024-11-15 11:53:48.796784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.397 qpair failed and we were unable to recover it. 00:30:23.397 [2024-11-15 11:53:48.797143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.397 [2024-11-15 11:53:48.797173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.397 qpair failed and we were unable to recover it. 00:30:23.397 [2024-11-15 11:53:48.797572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.397 [2024-11-15 11:53:48.797602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.397 qpair failed and we were unable to recover it. 00:30:23.397 [2024-11-15 11:53:48.797960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.397 [2024-11-15 11:53:48.797990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.397 qpair failed and we were unable to recover it. 
00:30:23.397 [2024-11-15 11:53:48.798330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.397 [2024-11-15 11:53:48.798362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.397 qpair failed and we were unable to recover it. 00:30:23.397 [2024-11-15 11:53:48.798705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.397 [2024-11-15 11:53:48.798737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.397 qpair failed and we were unable to recover it. 00:30:23.397 [2024-11-15 11:53:48.799136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.397 [2024-11-15 11:53:48.799166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.397 qpair failed and we were unable to recover it. 00:30:23.397 [2024-11-15 11:53:48.799527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.397 [2024-11-15 11:53:48.799555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.397 qpair failed and we were unable to recover it. 00:30:23.398 [2024-11-15 11:53:48.800008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.398 [2024-11-15 11:53:48.800039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.398 qpair failed and we were unable to recover it. 00:30:23.398 [2024-11-15 11:53:48.800405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.398 [2024-11-15 11:53:48.800435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.398 qpair failed and we were unable to recover it. 00:30:23.398 [2024-11-15 11:53:48.800777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.398 [2024-11-15 11:53:48.800807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.398 qpair failed and we were unable to recover it. 00:30:23.398 [2024-11-15 11:53:48.801153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.398 [2024-11-15 11:53:48.801181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.398 qpair failed and we were unable to recover it. 00:30:23.398 [2024-11-15 11:53:48.801585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.398 [2024-11-15 11:53:48.801622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.398 qpair failed and we were unable to recover it. 00:30:23.398 [2024-11-15 11:53:48.801992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.398 [2024-11-15 11:53:48.802020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.398 qpair failed and we were unable to recover it. 
00:30:23.398 [2024-11-15 11:53:48.802453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.398 [2024-11-15 11:53:48.802482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.398 qpair failed and we were unable to recover it. 00:30:23.398 [2024-11-15 11:53:48.802888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.398 [2024-11-15 11:53:48.802920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.398 qpair failed and we were unable to recover it. 00:30:23.398 [2024-11-15 11:53:48.803256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.398 [2024-11-15 11:53:48.803286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.398 qpair failed and we were unable to recover it. 00:30:23.398 [2024-11-15 11:53:48.803519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.398 [2024-11-15 11:53:48.803548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.398 qpair failed and we were unable to recover it. 00:30:23.398 [2024-11-15 11:53:48.803984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.398 [2024-11-15 11:53:48.804013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.398 qpair failed and we were unable to recover it. 00:30:23.398 [2024-11-15 11:53:48.804389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.398 [2024-11-15 11:53:48.804418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.398 qpair failed and we were unable to recover it. 00:30:23.398 [2024-11-15 11:53:48.804783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.398 [2024-11-15 11:53:48.804813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.398 qpair failed and we were unable to recover it. 00:30:23.398 [2024-11-15 11:53:48.804994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.398 [2024-11-15 11:53:48.805024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.398 qpair failed and we were unable to recover it. 00:30:23.398 [2024-11-15 11:53:48.805373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.398 [2024-11-15 11:53:48.805403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.398 qpair failed and we were unable to recover it. 00:30:23.398 [2024-11-15 11:53:48.805744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.398 [2024-11-15 11:53:48.805774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.398 qpair failed and we were unable to recover it. 
00:30:23.398 [2024-11-15 11:53:48.806115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.398 [2024-11-15 11:53:48.806145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.398 qpair failed and we were unable to recover it. 00:30:23.398 [2024-11-15 11:53:48.806437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.398 [2024-11-15 11:53:48.806466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.398 qpair failed and we were unable to recover it. 00:30:23.398 [2024-11-15 11:53:48.806817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.398 [2024-11-15 11:53:48.806848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.398 qpair failed and we were unable to recover it. 00:30:23.398 [2024-11-15 11:53:48.807293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.398 [2024-11-15 11:53:48.807325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.398 qpair failed and we were unable to recover it. 00:30:23.398 [2024-11-15 11:53:48.807604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.398 [2024-11-15 11:53:48.807633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.398 qpair failed and we were unable to recover it. 00:30:23.398 [2024-11-15 11:53:48.808060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.398 [2024-11-15 11:53:48.808089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.398 qpair failed and we were unable to recover it. 00:30:23.398 [2024-11-15 11:53:48.808465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.398 [2024-11-15 11:53:48.808494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.398 qpair failed and we were unable to recover it. 00:30:23.398 [2024-11-15 11:53:48.808790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.398 [2024-11-15 11:53:48.808819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.398 qpair failed and we were unable to recover it. 00:30:23.398 [2024-11-15 11:53:48.809083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.398 [2024-11-15 11:53:48.809115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.398 qpair failed and we were unable to recover it. 00:30:23.398 [2024-11-15 11:53:48.809480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.398 [2024-11-15 11:53:48.809509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.398 qpair failed and we were unable to recover it. 
00:30:23.398 [2024-11-15 11:53:48.809807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.398 [2024-11-15 11:53:48.809839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.398 qpair failed and we were unable to recover it.
[... identical posix_sock_create / nvme_tcp_qpair_connect_sock error sequence repeated continuously from 11:53:48.810159 through 11:53:48.892078: every connect() attempt to 10.0.0.2, port=4420 returned errno = 111 (connection refused) and tqpair=0x80b0c0 could not be recovered ...]
00:30:23.677 [2024-11-15 11:53:48.892078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.677 [2024-11-15 11:53:48.892106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.677 qpair failed and we were unable to recover it.
00:30:23.677 [2024-11-15 11:53:48.892481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.677 [2024-11-15 11:53:48.892511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.677 qpair failed and we were unable to recover it. 00:30:23.677 [2024-11-15 11:53:48.892883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.677 [2024-11-15 11:53:48.892915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.677 qpair failed and we were unable to recover it. 00:30:23.677 [2024-11-15 11:53:48.893264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.677 [2024-11-15 11:53:48.893293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.677 qpair failed and we were unable to recover it. 00:30:23.677 [2024-11-15 11:53:48.893655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.677 [2024-11-15 11:53:48.893687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.677 qpair failed and we were unable to recover it. 00:30:23.677 [2024-11-15 11:53:48.894063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.677 [2024-11-15 11:53:48.894091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.677 qpair failed and we were unable to recover it. 00:30:23.677 [2024-11-15 11:53:48.894436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.677 [2024-11-15 11:53:48.894466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.677 qpair failed and we were unable to recover it. 00:30:23.677 [2024-11-15 11:53:48.894792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.677 [2024-11-15 11:53:48.894823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.677 qpair failed and we were unable to recover it. 00:30:23.677 [2024-11-15 11:53:48.895183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.677 [2024-11-15 11:53:48.895211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.677 qpair failed and we were unable to recover it. 00:30:23.677 [2024-11-15 11:53:48.895532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.677 [2024-11-15 11:53:48.895560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.677 qpair failed and we were unable to recover it. 00:30:23.677 [2024-11-15 11:53:48.895796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.677 [2024-11-15 11:53:48.895828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.677 qpair failed and we were unable to recover it. 
00:30:23.677 [2024-11-15 11:53:48.896175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.677 [2024-11-15 11:53:48.896204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.677 qpair failed and we were unable to recover it. 00:30:23.677 [2024-11-15 11:53:48.896582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.677 [2024-11-15 11:53:48.896612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.677 qpair failed and we were unable to recover it. 00:30:23.677 [2024-11-15 11:53:48.896977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.677 [2024-11-15 11:53:48.897007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.677 qpair failed and we were unable to recover it. 00:30:23.677 [2024-11-15 11:53:48.897349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.677 [2024-11-15 11:53:48.897386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.677 qpair failed and we were unable to recover it. 00:30:23.677 [2024-11-15 11:53:48.897754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.677 [2024-11-15 11:53:48.897784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.677 qpair failed and we were unable to recover it. 00:30:23.677 [2024-11-15 11:53:48.898208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.677 [2024-11-15 11:53:48.898239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.677 qpair failed and we were unable to recover it. 00:30:23.677 [2024-11-15 11:53:48.898631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.677 [2024-11-15 11:53:48.898662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.677 qpair failed and we were unable to recover it. 00:30:23.677 [2024-11-15 11:53:48.899005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.677 [2024-11-15 11:53:48.899035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.677 qpair failed and we were unable to recover it. 00:30:23.677 [2024-11-15 11:53:48.899391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.677 [2024-11-15 11:53:48.899419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.677 qpair failed and we were unable to recover it. 00:30:23.677 [2024-11-15 11:53:48.899826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.677 [2024-11-15 11:53:48.899856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.677 qpair failed and we were unable to recover it. 
00:30:23.677 [2024-11-15 11:53:48.900216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.677 [2024-11-15 11:53:48.900246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.677 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.900607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.900636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.900990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.901019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.901369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.901397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.901659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.901693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.902068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.902098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.902466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.902494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.902736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.902765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.903138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.903166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.903540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.903579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 
00:30:23.678 [2024-11-15 11:53:48.903930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.903959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.904350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.904377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.904743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.904774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.905134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.905162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.905519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.905547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.905908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.905937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.906296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.906324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.906689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.906720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.907097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.907125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.907491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.907518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 
00:30:23.678 [2024-11-15 11:53:48.907871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.907908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.908251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.908281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.908617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.908645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.909031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.909060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.909425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.909453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.909833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.909861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.910234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.910262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.910503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.910534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.910914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.910943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.911313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.911343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 
00:30:23.678 [2024-11-15 11:53:48.911605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.911634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.912017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.912045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.912416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.912444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.912827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.912856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.913118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.913146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.913505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.913532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.913972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.914001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.914365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.914393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.914657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.914686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.915076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.915104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 
00:30:23.678 [2024-11-15 11:53:48.915344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.915374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.915668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.915700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.916056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.916085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.916330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.916358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.916698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.916727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.917086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.917116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.917475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.917502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.917725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.917757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.918148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.918177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.918537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.918585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 
00:30:23.678 [2024-11-15 11:53:48.918983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.919012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.919380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.919408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.919778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.919809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.920187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.920217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.920569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.920598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.920924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.920953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.921208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.921239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.921582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.678 [2024-11-15 11:53:48.921611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.678 qpair failed and we were unable to recover it. 00:30:23.678 [2024-11-15 11:53:48.921940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.921968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.922337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.922366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 
00:30:23.679 [2024-11-15 11:53:48.922733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.922763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.923096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.923130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.923503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.923532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.923917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.923948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.924301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.924330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.924685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.924715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.925162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.925190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.925507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.925543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.925910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.925939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.926311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.926339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 
00:30:23.679 [2024-11-15 11:53:48.926697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.926727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.926997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.927026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.927380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.927414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.927783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.927813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.928173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.928201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.928585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.928614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.928977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.929005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.929247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.929275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.929626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.929655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.930029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.930056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 
00:30:23.679 [2024-11-15 11:53:48.930423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.930454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.930802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.930831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.931199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.931228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.931485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.931517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.931884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.931915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.932149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.932182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.932533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.932570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.932932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.932960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.933321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.933356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.933719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.933748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 
00:30:23.679 [2024-11-15 11:53:48.934108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.934136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.934473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.934501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.934874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.934904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.935263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.935292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.935656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.935685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.936053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.936081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.936432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.936461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.936739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.936768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.937125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.937153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.937485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.937512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 
00:30:23.679 [2024-11-15 11:53:48.937880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.937910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.938272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.938300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.938671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.938702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.939086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.939114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.939470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.939498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.939789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.939819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.940104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.940132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.940502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.940530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.940892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.940921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 00:30:23.679 [2024-11-15 11:53:48.941292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.679 [2024-11-15 11:53:48.941319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.679 qpair failed and we were unable to recover it. 
00:30:23.680 [2024-11-15 11:53:48.941761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.680 [2024-11-15 11:53:48.941790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.680 qpair failed and we were unable to recover it.
[... the same three-message sequence repeats for every connection attempt between 11:53:48.941 and 11:53:49.021: posix_sock_create reports connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports the sock connection error for tqpair=0x80b0c0 with addr=10.0.0.2, port=4420, and the qpair is declared failed and unrecoverable ...]
00:30:23.683 [2024-11-15 11:53:49.021263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.683 [2024-11-15 11:53:49.021293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.683 qpair failed and we were unable to recover it.
00:30:23.683 [2024-11-15 11:53:49.021703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.683 [2024-11-15 11:53:49.021735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.683 qpair failed and we were unable to recover it. 00:30:23.683 [2024-11-15 11:53:49.022119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.683 [2024-11-15 11:53:49.022149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.683 qpair failed and we were unable to recover it. 00:30:23.683 [2024-11-15 11:53:49.022574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.683 [2024-11-15 11:53:49.022606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.683 qpair failed and we were unable to recover it. 00:30:23.683 [2024-11-15 11:53:49.022990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.683 [2024-11-15 11:53:49.023021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.683 qpair failed and we were unable to recover it. 00:30:23.683 [2024-11-15 11:53:49.023355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.683 [2024-11-15 11:53:49.023383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.683 qpair failed and we were unable to recover it. 00:30:23.683 [2024-11-15 11:53:49.023745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.683 [2024-11-15 11:53:49.023777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.683 qpair failed and we were unable to recover it. 00:30:23.683 [2024-11-15 11:53:49.024020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.683 [2024-11-15 11:53:49.024052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.683 qpair failed and we were unable to recover it. 00:30:23.683 [2024-11-15 11:53:49.024416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.024446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.024776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.024806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.025168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.025199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 
00:30:23.684 [2024-11-15 11:53:49.025451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.025480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.025880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.025910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.026276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.026304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.026673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.026708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.027087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.027120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.027480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.027508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.027880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.027910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.028268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.028297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.028664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.028694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.029066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.029096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 
00:30:23.684 [2024-11-15 11:53:49.029441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.029470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.029851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.029881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.030238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.030266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.030597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.030627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.030984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.031013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.031375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.031405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.031800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.031830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.032096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.032128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.032470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.032499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.032865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.032898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 
00:30:23.684 [2024-11-15 11:53:49.033234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.033265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.033635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.033666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.034033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.034061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.034330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.034358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.034705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.034736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.035071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.035100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.035395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.035425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.035656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.035687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.036067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.036097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.036391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.036422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 
00:30:23.684 [2024-11-15 11:53:49.036793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.036823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.037181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.037213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.037583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.037613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.037952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.037980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.038335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.038366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.038717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.038746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.039099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.039129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.039505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.039534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.039903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.039933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.040300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.040332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 
00:30:23.684 [2024-11-15 11:53:49.040680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.040710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.040976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.041005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.041350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.041380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.041707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.041738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.042083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.042118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.042432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.042463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.042725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.042757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.043115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.043143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.043429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.043460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.043767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.043797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 
00:30:23.684 [2024-11-15 11:53:49.044156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.044187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.044535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.044573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.045003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.045031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.045418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.045448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.045661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.045696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.046052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.684 [2024-11-15 11:53:49.046081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.684 qpair failed and we were unable to recover it. 00:30:23.684 [2024-11-15 11:53:49.046450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.046481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.046823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.046855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.047308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.047339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.047689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.047718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 
00:30:23.685 [2024-11-15 11:53:49.048089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.048119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.048502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.048530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.048867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.048897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.049245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.049275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.049637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.049670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.049999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.050029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.050323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.050352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.050710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.050741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.051112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.051143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.051382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.051412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 
00:30:23.685 [2024-11-15 11:53:49.051763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.051795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.052149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.052185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.052514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.052545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.052889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.052920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.053285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.053316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.053678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.053709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.054071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.054103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.054487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.054516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.054822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.054851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.055223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.055254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 
00:30:23.685 [2024-11-15 11:53:49.055612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.055645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.055980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.056011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.056382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.056413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.056782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.056814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.057183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.057213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.057528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.057557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.057919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.057950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.058327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.058357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.058730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.058768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.059140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.059169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 
00:30:23.685 [2024-11-15 11:53:49.059428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.059459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.059730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.059762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.060130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.060157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.060512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.060543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.060929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.060959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.061324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.061352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.061795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.061825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.062072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.062100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.062467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.062495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.062871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.062902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 
00:30:23.685 [2024-11-15 11:53:49.063263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.063292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.063560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.063601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.063969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.063997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.064430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.064461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.064817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.064849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.065191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.065220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.065581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.065612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.065961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.065990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.066349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.066378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.066736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.066767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 
00:30:23.685 [2024-11-15 11:53:49.067144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.067180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.067514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.067544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.067924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.067958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.685 [2024-11-15 11:53:49.068220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.685 [2024-11-15 11:53:49.068248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.685 qpair failed and we were unable to recover it. 00:30:23.686 [2024-11-15 11:53:49.068645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.686 [2024-11-15 11:53:49.068676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.686 qpair failed and we were unable to recover it. 00:30:23.686 [2024-11-15 11:53:49.069033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.686 [2024-11-15 11:53:49.069068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.686 qpair failed and we were unable to recover it. 00:30:23.686 [2024-11-15 11:53:49.069411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.686 [2024-11-15 11:53:49.069439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.686 qpair failed and we were unable to recover it. 00:30:23.686 [2024-11-15 11:53:49.069786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.686 [2024-11-15 11:53:49.069817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.686 qpair failed and we were unable to recover it. 00:30:23.686 [2024-11-15 11:53:49.070184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.686 [2024-11-15 11:53:49.070213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.686 qpair failed and we were unable to recover it. 00:30:23.686 [2024-11-15 11:53:49.070425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.686 [2024-11-15 11:53:49.070453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.686 qpair failed and we were unable to recover it. 
00:30:23.686 [2024-11-15 11:53:49.070775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.686 [2024-11-15 11:53:49.070804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.686 qpair failed and we were unable to recover it.
00:30:23.686 [... the same connect()/qpair-failure triple repeats back-to-back, unchanged except for advancing timestamps, from 11:53:49.070 through 11:53:49.149 -- errno = 111 throughout, same tqpair=0x80b0c0, same target addr=10.0.0.2, port=4420; the repeated occurrences are elided here ...]
00:30:23.689 [2024-11-15 11:53:49.150204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.689 [2024-11-15 11:53:49.150232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.689 qpair failed and we were unable to recover it. 00:30:23.689 [2024-11-15 11:53:49.150593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.689 [2024-11-15 11:53:49.150621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.689 qpair failed and we were unable to recover it. 00:30:23.689 [2024-11-15 11:53:49.150974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.689 [2024-11-15 11:53:49.151004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.689 qpair failed and we were unable to recover it. 00:30:23.689 [2024-11-15 11:53:49.151377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.689 [2024-11-15 11:53:49.151407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.689 qpair failed and we were unable to recover it. 00:30:23.689 [2024-11-15 11:53:49.151769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.689 [2024-11-15 11:53:49.151799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.689 qpair failed and we were unable to recover it. 00:30:23.689 [2024-11-15 11:53:49.152141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.689 [2024-11-15 11:53:49.152169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.689 qpair failed and we were unable to recover it. 00:30:23.689 [2024-11-15 11:53:49.152483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.689 [2024-11-15 11:53:49.152511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.689 qpair failed and we were unable to recover it. 00:30:23.689 [2024-11-15 11:53:49.152833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.689 [2024-11-15 11:53:49.152862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.689 qpair failed and we were unable to recover it. 00:30:23.689 [2024-11-15 11:53:49.153213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.689 [2024-11-15 11:53:49.153241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.689 qpair failed and we were unable to recover it. 00:30:23.689 [2024-11-15 11:53:49.153601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.689 [2024-11-15 11:53:49.153639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.689 qpair failed and we were unable to recover it. 
00:30:23.689 [2024-11-15 11:53:49.153967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.689 [2024-11-15 11:53:49.153995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.689 qpair failed and we were unable to recover it. 00:30:23.689 [2024-11-15 11:53:49.154371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.689 [2024-11-15 11:53:49.154399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.689 qpair failed and we were unable to recover it. 00:30:23.689 [2024-11-15 11:53:49.154729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.689 [2024-11-15 11:53:49.154758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.689 qpair failed and we were unable to recover it. 00:30:23.689 [2024-11-15 11:53:49.155126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.689 [2024-11-15 11:53:49.155155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.689 qpair failed and we were unable to recover it. 00:30:23.689 [2024-11-15 11:53:49.155401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.689 [2024-11-15 11:53:49.155430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.689 qpair failed and we were unable to recover it. 00:30:23.689 [2024-11-15 11:53:49.155775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.689 [2024-11-15 11:53:49.155804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.689 qpair failed and we were unable to recover it. 00:30:23.689 [2024-11-15 11:53:49.156180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.689 [2024-11-15 11:53:49.156208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.689 qpair failed and we were unable to recover it. 00:30:23.689 [2024-11-15 11:53:49.156584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.689 [2024-11-15 11:53:49.156615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.689 qpair failed and we were unable to recover it. 00:30:23.689 [2024-11-15 11:53:49.156978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.689 [2024-11-15 11:53:49.157005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.689 qpair failed and we were unable to recover it. 00:30:23.689 [2024-11-15 11:53:49.157335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.689 [2024-11-15 11:53:49.157366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.689 qpair failed and we were unable to recover it. 
00:30:23.689 [2024-11-15 11:53:49.157729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.689 [2024-11-15 11:53:49.157760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.689 qpair failed and we were unable to recover it. 00:30:23.689 [2024-11-15 11:53:49.158105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.689 [2024-11-15 11:53:49.158133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.689 qpair failed and we were unable to recover it. 00:30:23.689 [2024-11-15 11:53:49.158471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.689 [2024-11-15 11:53:49.158499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.689 qpair failed and we were unable to recover it. 00:30:23.689 [2024-11-15 11:53:49.158874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.689 [2024-11-15 11:53:49.158903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.689 qpair failed and we were unable to recover it. 00:30:23.689 [2024-11-15 11:53:49.159274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.689 [2024-11-15 11:53:49.159302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.689 qpair failed and we were unable to recover it. 00:30:23.689 [2024-11-15 11:53:49.159661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.689 [2024-11-15 11:53:49.159690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.689 qpair failed and we were unable to recover it. 00:30:23.689 [2024-11-15 11:53:49.160068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.689 [2024-11-15 11:53:49.160095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.689 qpair failed and we were unable to recover it. 00:30:23.689 [2024-11-15 11:53:49.160423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.689 [2024-11-15 11:53:49.160457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.689 qpair failed and we were unable to recover it. 00:30:23.689 [2024-11-15 11:53:49.160818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.689 [2024-11-15 11:53:49.160848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.689 qpair failed and we were unable to recover it. 00:30:23.689 [2024-11-15 11:53:49.161203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.689 [2024-11-15 11:53:49.161231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.689 qpair failed and we were unable to recover it. 
00:30:23.961 [2024-11-15 11:53:49.161592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-11-15 11:53:49.161625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.961 qpair failed and we were unable to recover it. 00:30:23.961 [2024-11-15 11:53:49.161992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-11-15 11:53:49.162020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.961 qpair failed and we were unable to recover it. 00:30:23.961 [2024-11-15 11:53:49.162666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-11-15 11:53:49.162709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.961 qpair failed and we were unable to recover it. 00:30:23.961 [2024-11-15 11:53:49.163047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-11-15 11:53:49.163084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.961 qpair failed and we were unable to recover it. 00:30:23.961 [2024-11-15 11:53:49.163400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-11-15 11:53:49.163431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.961 qpair failed and we were unable to recover it. 00:30:23.961 [2024-11-15 11:53:49.163664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-11-15 11:53:49.163695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.961 qpair failed and we were unable to recover it. 00:30:23.961 [2024-11-15 11:53:49.164035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-11-15 11:53:49.164067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.962 qpair failed and we were unable to recover it. 00:30:23.962 [2024-11-15 11:53:49.164437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-11-15 11:53:49.164466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.962 qpair failed and we were unable to recover it. 00:30:23.962 [2024-11-15 11:53:49.164807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-11-15 11:53:49.164838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.962 qpair failed and we were unable to recover it. 00:30:23.962 [2024-11-15 11:53:49.165209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-11-15 11:53:49.165238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.962 qpair failed and we were unable to recover it. 
00:30:23.962 [2024-11-15 11:53:49.165598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-11-15 11:53:49.165629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.962 qpair failed and we were unable to recover it. 00:30:23.962 [2024-11-15 11:53:49.166027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-11-15 11:53:49.166056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.962 qpair failed and we were unable to recover it. 00:30:23.962 [2024-11-15 11:53:49.166308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-11-15 11:53:49.166338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.962 qpair failed and we were unable to recover it. 00:30:23.962 [2024-11-15 11:53:49.166713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-11-15 11:53:49.166742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.962 qpair failed and we were unable to recover it. 00:30:23.962 [2024-11-15 11:53:49.167125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-11-15 11:53:49.167153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.962 qpair failed and we were unable to recover it. 00:30:23.962 [2024-11-15 11:53:49.167514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-11-15 11:53:49.167543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.962 qpair failed and we were unable to recover it. 00:30:23.962 [2024-11-15 11:53:49.167909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-11-15 11:53:49.167938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.962 qpair failed and we were unable to recover it. 00:30:23.962 [2024-11-15 11:53:49.168318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-11-15 11:53:49.168346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.962 qpair failed and we were unable to recover it. 00:30:23.962 [2024-11-15 11:53:49.168701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-11-15 11:53:49.168731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.962 qpair failed and we were unable to recover it. 00:30:23.962 [2024-11-15 11:53:49.169135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-11-15 11:53:49.169163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.962 qpair failed and we were unable to recover it. 
00:30:23.962 [2024-11-15 11:53:49.169511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-11-15 11:53:49.169548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.962 qpair failed and we were unable to recover it. 00:30:23.962 [2024-11-15 11:53:49.169930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-11-15 11:53:49.169959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.962 qpair failed and we were unable to recover it. 00:30:23.962 [2024-11-15 11:53:49.170327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-11-15 11:53:49.170355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.962 qpair failed and we were unable to recover it. 00:30:23.962 [2024-11-15 11:53:49.170725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-11-15 11:53:49.170755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.962 qpair failed and we were unable to recover it. 00:30:23.962 [2024-11-15 11:53:49.171120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-11-15 11:53:49.171154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.962 qpair failed and we were unable to recover it. 00:30:23.962 [2024-11-15 11:53:49.171472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-11-15 11:53:49.171500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.962 qpair failed and we were unable to recover it. 00:30:23.962 [2024-11-15 11:53:49.171853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-11-15 11:53:49.171885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.962 qpair failed and we were unable to recover it. 00:30:23.962 [2024-11-15 11:53:49.172124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-11-15 11:53:49.172155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.962 qpair failed and we were unable to recover it. 00:30:23.962 [2024-11-15 11:53:49.172489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-11-15 11:53:49.172519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.962 qpair failed and we were unable to recover it. 00:30:23.962 [2024-11-15 11:53:49.172858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-11-15 11:53:49.172888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.962 qpair failed and we were unable to recover it. 
00:30:23.962 [2024-11-15 11:53:49.173248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-11-15 11:53:49.173276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.962 qpair failed and we were unable to recover it. 00:30:23.962 [2024-11-15 11:53:49.173643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-11-15 11:53:49.173673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.962 qpair failed and we were unable to recover it. 00:30:23.962 [2024-11-15 11:53:49.174067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-11-15 11:53:49.174097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.962 qpair failed and we were unable to recover it. 00:30:23.962 [2024-11-15 11:53:49.174468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-11-15 11:53:49.174496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.962 qpair failed and we were unable to recover it. 00:30:23.962 [2024-11-15 11:53:49.174861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-11-15 11:53:49.174893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.962 qpair failed and we were unable to recover it. 00:30:23.962 [2024-11-15 11:53:49.175249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-11-15 11:53:49.175277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.962 qpair failed and we were unable to recover it. 00:30:23.962 [2024-11-15 11:53:49.175623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-11-15 11:53:49.175663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.962 qpair failed and we were unable to recover it. 00:30:23.962 [2024-11-15 11:53:49.175954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-11-15 11:53:49.175983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.962 qpair failed and we were unable to recover it. 00:30:23.962 [2024-11-15 11:53:49.176372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-11-15 11:53:49.176400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.962 qpair failed and we were unable to recover it. 00:30:23.962 [2024-11-15 11:53:49.176751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-11-15 11:53:49.176780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.962 qpair failed and we were unable to recover it. 
00:30:23.962 [2024-11-15 11:53:49.177138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-11-15 11:53:49.177167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.962 qpair failed and we were unable to recover it. 00:30:23.962 [2024-11-15 11:53:49.177539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-11-15 11:53:49.177582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.962 qpair failed and we were unable to recover it. 00:30:23.962 [2024-11-15 11:53:49.177956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-11-15 11:53:49.177986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.962 qpair failed and we were unable to recover it. 00:30:23.962 [2024-11-15 11:53:49.178395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.962 [2024-11-15 11:53:49.178424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.962 qpair failed and we were unable to recover it. 00:30:23.963 [2024-11-15 11:53:49.178791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-11-15 11:53:49.178821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.963 qpair failed and we were unable to recover it. 00:30:23.963 [2024-11-15 11:53:49.179196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-11-15 11:53:49.179224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.963 qpair failed and we were unable to recover it. 00:30:23.963 [2024-11-15 11:53:49.179589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-11-15 11:53:49.179619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.963 qpair failed and we were unable to recover it. 00:30:23.963 [2024-11-15 11:53:49.180035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-11-15 11:53:49.180063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.963 qpair failed and we were unable to recover it. 00:30:23.963 [2024-11-15 11:53:49.180420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-11-15 11:53:49.180450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.963 qpair failed and we were unable to recover it. 00:30:23.963 [2024-11-15 11:53:49.180807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-11-15 11:53:49.180836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.963 qpair failed and we were unable to recover it. 
00:30:23.963 [2024-11-15 11:53:49.181211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-11-15 11:53:49.181240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.963 qpair failed and we were unable to recover it. 00:30:23.963 [2024-11-15 11:53:49.181471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-11-15 11:53:49.181503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.963 qpair failed and we were unable to recover it. 00:30:23.963 [2024-11-15 11:53:49.181921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-11-15 11:53:49.181953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.963 qpair failed and we were unable to recover it. 00:30:23.963 [2024-11-15 11:53:49.182319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-11-15 11:53:49.182348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.963 qpair failed and we were unable to recover it. 00:30:23.963 [2024-11-15 11:53:49.182701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-11-15 11:53:49.182731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.963 qpair failed and we were unable to recover it. 00:30:23.963 [2024-11-15 11:53:49.183144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-11-15 11:53:49.183172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.963 qpair failed and we were unable to recover it. 00:30:23.963 [2024-11-15 11:53:49.183541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-11-15 11:53:49.183577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.963 qpair failed and we were unable to recover it. 00:30:23.963 [2024-11-15 11:53:49.183899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-11-15 11:53:49.183927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.963 qpair failed and we were unable to recover it. 00:30:23.963 [2024-11-15 11:53:49.184327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-11-15 11:53:49.184354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.963 qpair failed and we were unable to recover it. 00:30:23.963 [2024-11-15 11:53:49.184615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-11-15 11:53:49.184648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.963 qpair failed and we were unable to recover it. 
00:30:23.963 [2024-11-15 11:53:49.184925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-11-15 11:53:49.184953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.963 qpair failed and we were unable to recover it. 00:30:23.963 [2024-11-15 11:53:49.185324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-11-15 11:53:49.185352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.963 qpair failed and we were unable to recover it. 00:30:23.963 [2024-11-15 11:53:49.185711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-11-15 11:53:49.185742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.963 qpair failed and we were unable to recover it. 00:30:23.963 [2024-11-15 11:53:49.186105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-11-15 11:53:49.186133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.963 qpair failed and we were unable to recover it. 00:30:23.963 [2024-11-15 11:53:49.186438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-11-15 11:53:49.186465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.963 qpair failed and we were unable to recover it. 00:30:23.963 [2024-11-15 11:53:49.186836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-11-15 11:53:49.186873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.963 qpair failed and we were unable to recover it. 00:30:23.963 [2024-11-15 11:53:49.187220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-11-15 11:53:49.187248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.963 qpair failed and we were unable to recover it. 00:30:23.963 [2024-11-15 11:53:49.187621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-11-15 11:53:49.187652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.963 qpair failed and we were unable to recover it. 00:30:23.963 [2024-11-15 11:53:49.187998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-11-15 11:53:49.188036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.963 qpair failed and we were unable to recover it. 00:30:23.963 [2024-11-15 11:53:49.188296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-11-15 11:53:49.188327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.963 qpair failed and we were unable to recover it. 
00:30:23.963 [2024-11-15 11:53:49.188663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-11-15 11:53:49.188692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.963 qpair failed and we were unable to recover it. 00:30:23.963 [2024-11-15 11:53:49.188974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-11-15 11:53:49.189002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.963 qpair failed and we were unable to recover it. 00:30:23.963 [2024-11-15 11:53:49.189362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-11-15 11:53:49.189391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.963 qpair failed and we were unable to recover it. 00:30:23.963 [2024-11-15 11:53:49.189746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-11-15 11:53:49.189775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.963 qpair failed and we were unable to recover it. 00:30:23.963 [2024-11-15 11:53:49.190109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-11-15 11:53:49.190138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.963 qpair failed and we were unable to recover it. 00:30:23.963 [2024-11-15 11:53:49.190497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-11-15 11:53:49.190526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.963 qpair failed and we were unable to recover it. 00:30:23.963 [2024-11-15 11:53:49.190896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-11-15 11:53:49.190925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.963 qpair failed and we were unable to recover it. 00:30:23.963 [2024-11-15 11:53:49.191260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-11-15 11:53:49.191288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.963 qpair failed and we were unable to recover it. 00:30:23.963 [2024-11-15 11:53:49.191651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-11-15 11:53:49.191680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.963 qpair failed and we were unable to recover it. 00:30:23.963 [2024-11-15 11:53:49.192030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-11-15 11:53:49.192059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.963 qpair failed and we were unable to recover it. 
00:30:23.963 [2024-11-15 11:53:49.192396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-11-15 11:53:49.192424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.963 qpair failed and we were unable to recover it. 00:30:23.963 [2024-11-15 11:53:49.192772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.963 [2024-11-15 11:53:49.192803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.963 qpair failed and we were unable to recover it. 00:30:23.963 [2024-11-15 11:53:49.193093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.964 [2024-11-15 11:53:49.193121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.964 qpair failed and we were unable to recover it. 00:30:23.964 [2024-11-15 11:53:49.193496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.964 [2024-11-15 11:53:49.193523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.964 qpair failed and we were unable to recover it. 00:30:23.964 [2024-11-15 11:53:49.193947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.964 [2024-11-15 11:53:49.193977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.964 qpair failed and we were unable to recover it. 00:30:23.964 [2024-11-15 11:53:49.194304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.964 [2024-11-15 11:53:49.194333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.964 qpair failed and we were unable to recover it. 00:30:23.964 [2024-11-15 11:53:49.194678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.964 [2024-11-15 11:53:49.194707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.964 qpair failed and we were unable to recover it. 00:30:23.964 [2024-11-15 11:53:49.195069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.964 [2024-11-15 11:53:49.195097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.964 qpair failed and we were unable to recover it. 00:30:23.964 [2024-11-15 11:53:49.195460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.964 [2024-11-15 11:53:49.195489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.964 qpair failed and we were unable to recover it. 00:30:23.964 [2024-11-15 11:53:49.195788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.964 [2024-11-15 11:53:49.195817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.964 qpair failed and we were unable to recover it. 
00:30:23.964 [2024-11-15 11:53:49.196173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.964 [2024-11-15 11:53:49.196201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.964 qpair failed and we were unable to recover it. 00:30:23.964 [2024-11-15 11:53:49.196551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.964 [2024-11-15 11:53:49.196589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.964 qpair failed and we were unable to recover it. 00:30:23.964 [2024-11-15 11:53:49.196956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.964 [2024-11-15 11:53:49.196989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.964 qpair failed and we were unable to recover it. 00:30:23.964 [2024-11-15 11:53:49.197318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.964 [2024-11-15 11:53:49.197346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.964 qpair failed and we were unable to recover it. 00:30:23.964 [2024-11-15 11:53:49.197719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.964 [2024-11-15 11:53:49.197750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.964 qpair failed and we were unable to recover it. 00:30:23.964 [2024-11-15 11:53:49.198118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.964 [2024-11-15 11:53:49.198146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.964 qpair failed and we were unable to recover it. 00:30:23.964 [2024-11-15 11:53:49.198583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.964 [2024-11-15 11:53:49.198613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.964 qpair failed and we were unable to recover it. 00:30:23.964 [2024-11-15 11:53:49.198978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.964 [2024-11-15 11:53:49.199006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.964 qpair failed and we were unable to recover it. 00:30:23.964 [2024-11-15 11:53:49.199358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.964 [2024-11-15 11:53:49.199387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.964 qpair failed and we were unable to recover it. 00:30:23.964 [2024-11-15 11:53:49.199748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.964 [2024-11-15 11:53:49.199778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.964 qpair failed and we were unable to recover it. 
00:30:23.964 [2024-11-15 11:53:49.200036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.964 [2024-11-15 11:53:49.200064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.964 qpair failed and we were unable to recover it.
00:30:23.964 [2024-11-15 11:53:49.200319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.964 [2024-11-15 11:53:49.200346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.964 qpair failed and we were unable to recover it.
00:30:23.964 (the same three-line failure sequence repeats about 200 further times, timestamps 11:53:49.200 through 11:53:49.278, always tqpair=0x80b0c0, addr=10.0.0.2, port=4420, errno = 111)
00:30:23.970 [2024-11-15 11:53:49.278547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.970 [2024-11-15 11:53:49.278590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.970 qpair failed and we were unable to recover it.
00:30:23.970 [2024-11-15 11:53:49.278836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.970 [2024-11-15 11:53:49.278865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.970 qpair failed and we were unable to recover it.
00:30:23.970 [2024-11-15 11:53:49.279254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.970 [2024-11-15 11:53:49.279284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.970 qpair failed and we were unable to recover it. 00:30:23.970 [2024-11-15 11:53:49.279645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.970 [2024-11-15 11:53:49.279676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.970 qpair failed and we were unable to recover it. 00:30:23.970 [2024-11-15 11:53:49.279958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.970 [2024-11-15 11:53:49.279989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.970 qpair failed and we were unable to recover it. 00:30:23.970 [2024-11-15 11:53:49.280350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.970 [2024-11-15 11:53:49.280380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.970 qpair failed and we were unable to recover it. 00:30:23.970 [2024-11-15 11:53:49.280797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.970 [2024-11-15 11:53:49.280831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.970 qpair failed and we were unable to recover it. 00:30:23.970 [2024-11-15 11:53:49.281236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.970 [2024-11-15 11:53:49.281266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.970 qpair failed and we were unable to recover it. 00:30:23.970 [2024-11-15 11:53:49.281643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.970 [2024-11-15 11:53:49.281676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.970 qpair failed and we were unable to recover it. 00:30:23.970 [2024-11-15 11:53:49.281969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.970 [2024-11-15 11:53:49.281999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.970 qpair failed and we were unable to recover it. 00:30:23.970 [2024-11-15 11:53:49.282363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.970 [2024-11-15 11:53:49.282393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.970 qpair failed and we were unable to recover it. 00:30:23.970 [2024-11-15 11:53:49.282744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.970 [2024-11-15 11:53:49.282775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.970 qpair failed and we were unable to recover it. 
00:30:23.970 [2024-11-15 11:53:49.283071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.970 [2024-11-15 11:53:49.283100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.970 qpair failed and we were unable to recover it. 00:30:23.970 [2024-11-15 11:53:49.283484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.970 [2024-11-15 11:53:49.283512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.970 qpair failed and we were unable to recover it. 00:30:23.970 [2024-11-15 11:53:49.283978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.970 [2024-11-15 11:53:49.284009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.970 qpair failed and we were unable to recover it. 00:30:23.970 [2024-11-15 11:53:49.284274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.970 [2024-11-15 11:53:49.284307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.970 qpair failed and we were unable to recover it. 00:30:23.970 [2024-11-15 11:53:49.284733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.970 [2024-11-15 11:53:49.284767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.970 qpair failed and we were unable to recover it. 00:30:23.970 [2024-11-15 11:53:49.285156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.970 [2024-11-15 11:53:49.285187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.970 qpair failed and we were unable to recover it. 00:30:23.970 [2024-11-15 11:53:49.285551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.970 [2024-11-15 11:53:49.285591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.970 qpair failed and we were unable to recover it. 00:30:23.970 [2024-11-15 11:53:49.285847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.970 [2024-11-15 11:53:49.285876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.970 qpair failed and we were unable to recover it. 00:30:23.970 [2024-11-15 11:53:49.286251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.970 [2024-11-15 11:53:49.286281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.970 qpair failed and we were unable to recover it. 00:30:23.970 [2024-11-15 11:53:49.286644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.970 [2024-11-15 11:53:49.286676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.970 qpair failed and we were unable to recover it. 
00:30:23.970 [2024-11-15 11:53:49.287043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.970 [2024-11-15 11:53:49.287072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.970 qpair failed and we were unable to recover it. 00:30:23.970 [2024-11-15 11:53:49.287435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.970 [2024-11-15 11:53:49.287466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.970 qpair failed and we were unable to recover it. 00:30:23.970 [2024-11-15 11:53:49.287826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.970 [2024-11-15 11:53:49.287858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.970 qpair failed and we were unable to recover it. 00:30:23.970 [2024-11-15 11:53:49.288210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.970 [2024-11-15 11:53:49.288239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.970 qpair failed and we were unable to recover it. 00:30:23.970 [2024-11-15 11:53:49.288473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.970 [2024-11-15 11:53:49.288501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.970 qpair failed and we were unable to recover it. 00:30:23.970 [2024-11-15 11:53:49.288912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.970 [2024-11-15 11:53:49.288944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.970 qpair failed and we were unable to recover it. 00:30:23.970 [2024-11-15 11:53:49.289130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.970 [2024-11-15 11:53:49.289159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.970 qpair failed and we were unable to recover it. 00:30:23.970 [2024-11-15 11:53:49.289558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.970 [2024-11-15 11:53:49.289608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.970 qpair failed and we were unable to recover it. 00:30:23.970 [2024-11-15 11:53:49.289900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.970 [2024-11-15 11:53:49.289935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.970 qpair failed and we were unable to recover it. 00:30:23.970 [2024-11-15 11:53:49.290284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.970 [2024-11-15 11:53:49.290314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.970 qpair failed and we were unable to recover it. 
00:30:23.970 [2024-11-15 11:53:49.290688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.970 [2024-11-15 11:53:49.290719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.970 qpair failed and we were unable to recover it. 00:30:23.970 [2024-11-15 11:53:49.291045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.970 [2024-11-15 11:53:49.291079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.970 qpair failed and we were unable to recover it. 00:30:23.970 [2024-11-15 11:53:49.291310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.970 [2024-11-15 11:53:49.291341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.970 qpair failed and we were unable to recover it. 00:30:23.971 [2024-11-15 11:53:49.291721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.971 [2024-11-15 11:53:49.291758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.971 qpair failed and we were unable to recover it. 00:30:23.971 [2024-11-15 11:53:49.292110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.971 [2024-11-15 11:53:49.292141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.971 qpair failed and we were unable to recover it. 00:30:23.971 [2024-11-15 11:53:49.292508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.971 [2024-11-15 11:53:49.292538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.971 qpair failed and we were unable to recover it. 00:30:23.971 [2024-11-15 11:53:49.292918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.971 [2024-11-15 11:53:49.292948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.971 qpair failed and we were unable to recover it. 00:30:23.971 [2024-11-15 11:53:49.293311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.971 [2024-11-15 11:53:49.293344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.971 qpair failed and we were unable to recover it. 00:30:23.971 [2024-11-15 11:53:49.293519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.971 [2024-11-15 11:53:49.293547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.971 qpair failed and we were unable to recover it. 00:30:23.971 [2024-11-15 11:53:49.293951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.971 [2024-11-15 11:53:49.293983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.971 qpair failed and we were unable to recover it. 
00:30:23.971 [2024-11-15 11:53:49.294355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.971 [2024-11-15 11:53:49.294387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.971 qpair failed and we were unable to recover it. 00:30:23.971 [2024-11-15 11:53:49.294651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.971 [2024-11-15 11:53:49.294681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.971 qpair failed and we were unable to recover it. 00:30:23.971 [2024-11-15 11:53:49.295047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.971 [2024-11-15 11:53:49.295077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.971 qpair failed and we were unable to recover it. 00:30:23.971 [2024-11-15 11:53:49.295432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.971 [2024-11-15 11:53:49.295462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.971 qpair failed and we were unable to recover it. 00:30:23.971 [2024-11-15 11:53:49.295731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.971 [2024-11-15 11:53:49.295761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.971 qpair failed and we were unable to recover it. 00:30:23.971 [2024-11-15 11:53:49.296023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.971 [2024-11-15 11:53:49.296053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.971 qpair failed and we were unable to recover it. 00:30:23.971 [2024-11-15 11:53:49.296411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.971 [2024-11-15 11:53:49.296441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.971 qpair failed and we were unable to recover it. 00:30:23.971 [2024-11-15 11:53:49.296831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.971 [2024-11-15 11:53:49.296863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.971 qpair failed and we were unable to recover it. 00:30:23.971 [2024-11-15 11:53:49.297123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.971 [2024-11-15 11:53:49.297151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.971 qpair failed and we were unable to recover it. 00:30:23.971 [2024-11-15 11:53:49.297590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.971 [2024-11-15 11:53:49.297622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.971 qpair failed and we were unable to recover it. 
00:30:23.971 [2024-11-15 11:53:49.297996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.971 [2024-11-15 11:53:49.298026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.971 qpair failed and we were unable to recover it. 00:30:23.971 [2024-11-15 11:53:49.298375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.971 [2024-11-15 11:53:49.298405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.971 qpair failed and we were unable to recover it. 00:30:23.971 [2024-11-15 11:53:49.298662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.971 [2024-11-15 11:53:49.298692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.971 qpair failed and we were unable to recover it. 00:30:23.971 [2024-11-15 11:53:49.299112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.971 [2024-11-15 11:53:49.299142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.971 qpair failed and we were unable to recover it. 00:30:23.971 [2024-11-15 11:53:49.299521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.971 [2024-11-15 11:53:49.299552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.971 qpair failed and we were unable to recover it. 00:30:23.971 [2024-11-15 11:53:49.299955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.971 [2024-11-15 11:53:49.299985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.971 qpair failed and we were unable to recover it. 00:30:23.971 [2024-11-15 11:53:49.300330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.971 [2024-11-15 11:53:49.300358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.971 qpair failed and we were unable to recover it. 00:30:23.971 [2024-11-15 11:53:49.300712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.971 [2024-11-15 11:53:49.300742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.971 qpair failed and we were unable to recover it. 00:30:23.971 [2024-11-15 11:53:49.301097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.971 [2024-11-15 11:53:49.301134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.971 qpair failed and we were unable to recover it. 00:30:23.971 [2024-11-15 11:53:49.301483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.971 [2024-11-15 11:53:49.301511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.971 qpair failed and we were unable to recover it. 
00:30:23.971 [2024-11-15 11:53:49.301916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.971 [2024-11-15 11:53:49.301949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.971 qpair failed and we were unable to recover it. 00:30:23.971 [2024-11-15 11:53:49.302310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.971 [2024-11-15 11:53:49.302341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.971 qpair failed and we were unable to recover it. 00:30:23.971 [2024-11-15 11:53:49.302609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.971 [2024-11-15 11:53:49.302642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.971 qpair failed and we were unable to recover it. 00:30:23.971 [2024-11-15 11:53:49.302986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.971 [2024-11-15 11:53:49.303018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.971 qpair failed and we were unable to recover it. 00:30:23.971 [2024-11-15 11:53:49.303362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.971 [2024-11-15 11:53:49.303391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.971 qpair failed and we were unable to recover it. 00:30:23.971 [2024-11-15 11:53:49.303746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.971 [2024-11-15 11:53:49.303779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.971 qpair failed and we were unable to recover it. 00:30:23.971 [2024-11-15 11:53:49.304170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.971 [2024-11-15 11:53:49.304203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.971 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-15 11:53:49.304530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-15 11:53:49.304560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-15 11:53:49.304948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-15 11:53:49.304986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-15 11:53:49.305338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-15 11:53:49.305369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 
00:30:23.972 [2024-11-15 11:53:49.305722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-15 11:53:49.305755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-15 11:53:49.306015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-15 11:53:49.306046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-15 11:53:49.306397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-15 11:53:49.306429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-15 11:53:49.306790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-15 11:53:49.306821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-15 11:53:49.307156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-15 11:53:49.307186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-15 11:53:49.307517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-15 11:53:49.307547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-15 11:53:49.308026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-15 11:53:49.308056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-15 11:53:49.308416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-15 11:53:49.308446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-15 11:53:49.308732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-15 11:53:49.308765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-15 11:53:49.309154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-15 11:53:49.309184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 
00:30:23.972 [2024-11-15 11:53:49.309544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-15 11:53:49.309585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-15 11:53:49.309933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-15 11:53:49.309963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-15 11:53:49.310352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-15 11:53:49.310382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-15 11:53:49.310723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-15 11:53:49.310753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-15 11:53:49.311126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-15 11:53:49.311156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-15 11:53:49.311396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-15 11:53:49.311428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-15 11:53:49.311687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-15 11:53:49.311717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-15 11:53:49.312080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-15 11:53:49.312111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-15 11:53:49.312524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-15 11:53:49.312552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-15 11:53:49.312938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-15 11:53:49.312966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 
00:30:23.972 [2024-11-15 11:53:49.313331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-15 11:53:49.313359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-15 11:53:49.313601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-15 11:53:49.313633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-15 11:53:49.313979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-15 11:53:49.314008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-15 11:53:49.314342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-15 11:53:49.314371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-15 11:53:49.314729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-15 11:53:49.314759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-15 11:53:49.315128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-15 11:53:49.315163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-15 11:53:49.315535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-15 11:53:49.315598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-15 11:53:49.315925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-15 11:53:49.315953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-15 11:53:49.316206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-15 11:53:49.316233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-15 11:53:49.316593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-15 11:53:49.316623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 
00:30:23.972 [2024-11-15 11:53:49.317007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-15 11:53:49.317035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-15 11:53:49.317399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-15 11:53:49.317426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-15 11:53:49.317836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-15 11:53:49.317865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-15 11:53:49.318232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-15 11:53:49.318262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-15 11:53:49.318638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-15 11:53:49.318667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-15 11:53:49.318967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-15 11:53:49.318994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-15 11:53:49.319358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-15 11:53:49.319386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-15 11:53:49.319723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-15 11:53:49.319752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-15 11:53:49.320096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-15 11:53:49.320124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-15 11:53:49.320477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-15 11:53:49.320508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 
00:30:23.973 [2024-11-15 11:53:49.320851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-15 11:53:49.320881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-15 11:53:49.321203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-15 11:53:49.321231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-15 11:53:49.321582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-15 11:53:49.321612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-15 11:53:49.321979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-15 11:53:49.322008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-15 11:53:49.322375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-15 11:53:49.322403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-15 11:53:49.322752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-15 11:53:49.322783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-15 11:53:49.323126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-15 11:53:49.323154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-15 11:53:49.323515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-15 11:53:49.323544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-15 11:53:49.323823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-15 11:53:49.323852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-15 11:53:49.324218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-15 11:53:49.324247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 
00:30:23.973 [2024-11-15 11:53:49.324601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-15 11:53:49.324630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-15 11:53:49.324999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-15 11:53:49.325027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-15 11:53:49.325282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-15 11:53:49.325310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-15 11:53:49.325638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-15 11:53:49.325668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-15 11:53:49.326045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-15 11:53:49.326072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-15 11:53:49.326418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-15 11:53:49.326447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-15 11:53:49.326818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-15 11:53:49.326847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-15 11:53:49.327164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-15 11:53:49.327195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-15 11:53:49.327525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-15 11:53:49.327553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-15 11:53:49.327929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-15 11:53:49.327958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 
00:30:23.973 [2024-11-15 11:53:49.328333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.973 [2024-11-15 11:53:49.328361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:23.973 qpair failed and we were unable to recover it.
00:30:23.979 (the three messages above repeated 209 more times between 11:53:49.328 and 11:53:49.407; every connect() attempt to 10.0.0.2:4420 in this window was refused with errno = 111 / ECONNREFUSED, and the qpair could not be recovered)
00:30:23.979 [2024-11-15 11:53:49.407899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-15 11:53:49.407940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-15 11:53:49.408273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-15 11:53:49.408302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-15 11:53:49.408662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-15 11:53:49.408693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-15 11:53:49.409061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-15 11:53:49.409089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-15 11:53:49.409455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-15 11:53:49.409484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-15 11:53:49.409851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-15 11:53:49.409880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-15 11:53:49.410266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-15 11:53:49.410293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-15 11:53:49.410662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-15 11:53:49.410693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-15 11:53:49.411106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-15 11:53:49.411134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-15 11:53:49.411491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-15 11:53:49.411519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 
00:30:23.979 [2024-11-15 11:53:49.411896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-15 11:53:49.411928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-15 11:53:49.412156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-15 11:53:49.412183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-15 11:53:49.412521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-15 11:53:49.412549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-15 11:53:49.412853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-15 11:53:49.412885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-15 11:53:49.413284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-15 11:53:49.413313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-15 11:53:49.413676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-15 11:53:49.413706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-15 11:53:49.413937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-15 11:53:49.413965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-15 11:53:49.414329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-15 11:53:49.414363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-15 11:53:49.414719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-15 11:53:49.414749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-15 11:53:49.415117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-15 11:53:49.415146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 
00:30:23.979 [2024-11-15 11:53:49.415516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-15 11:53:49.415552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-15 11:53:49.415914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-15 11:53:49.415943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-15 11:53:49.416315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-15 11:53:49.416344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-15 11:53:49.416590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-15 11:53:49.416623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-15 11:53:49.416987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-15 11:53:49.417016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-15 11:53:49.417382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-15 11:53:49.417410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-15 11:53:49.417677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-15 11:53:49.417706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-15 11:53:49.418072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-15 11:53:49.418101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-15 11:53:49.418463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-15 11:53:49.418492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-15 11:53:49.418859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-15 11:53:49.418890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 
00:30:23.979 [2024-11-15 11:53:49.419241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-15 11:53:49.419268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-15 11:53:49.419621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-15 11:53:49.419650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-15 11:53:49.420001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-15 11:53:49.420030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-15 11:53:49.420390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-15 11:53:49.420418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-15 11:53:49.420681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-15 11:53:49.420710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-15 11:53:49.421083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-15 11:53:49.421112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-15 11:53:49.421443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-15 11:53:49.421471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-15 11:53:49.421854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-15 11:53:49.421884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-15 11:53:49.422222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-15 11:53:49.422250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-15 11:53:49.422618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-15 11:53:49.422647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 
00:30:23.980 [2024-11-15 11:53:49.422999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-15 11:53:49.423028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-15 11:53:49.423385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-15 11:53:49.423419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-15 11:53:49.423807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-15 11:53:49.423837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-15 11:53:49.424178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-15 11:53:49.424206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-15 11:53:49.424643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-15 11:53:49.424673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-15 11:53:49.425023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-15 11:53:49.425051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-15 11:53:49.425437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-15 11:53:49.425466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-15 11:53:49.425799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-15 11:53:49.425829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-15 11:53:49.426209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-15 11:53:49.426237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-15 11:53:49.426601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-15 11:53:49.426632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 
00:30:23.980 [2024-11-15 11:53:49.427021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-15 11:53:49.427049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-15 11:53:49.427419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-15 11:53:49.427449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-15 11:53:49.427803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-15 11:53:49.427833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-15 11:53:49.428173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-15 11:53:49.428209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-15 11:53:49.428471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-15 11:53:49.428501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-15 11:53:49.428840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-15 11:53:49.428870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-15 11:53:49.429233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-15 11:53:49.429262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-15 11:53:49.429644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-15 11:53:49.429674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-15 11:53:49.430032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-15 11:53:49.430060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-15 11:53:49.430388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-15 11:53:49.430416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 
00:30:23.980 [2024-11-15 11:53:49.430781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-15 11:53:49.430811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-15 11:53:49.431184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-15 11:53:49.431212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-15 11:53:49.431558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-15 11:53:49.431609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-15 11:53:49.431990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-15 11:53:49.432018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-15 11:53:49.432448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-15 11:53:49.432477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-15 11:53:49.432823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-15 11:53:49.432852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.981 qpair failed and we were unable to recover it. 00:30:23.981 [2024-11-15 11:53:49.433260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.981 [2024-11-15 11:53:49.433288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.981 qpair failed and we were unable to recover it. 00:30:23.981 [2024-11-15 11:53:49.433630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.981 [2024-11-15 11:53:49.433659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.981 qpair failed and we were unable to recover it. 00:30:23.981 [2024-11-15 11:53:49.434022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.981 [2024-11-15 11:53:49.434056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.981 qpair failed and we were unable to recover it. 00:30:23.981 [2024-11-15 11:53:49.434394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.981 [2024-11-15 11:53:49.434424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.981 qpair failed and we were unable to recover it. 
00:30:23.981 [2024-11-15 11:53:49.434775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.981 [2024-11-15 11:53:49.434804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.981 qpair failed and we were unable to recover it. 00:30:23.981 [2024-11-15 11:53:49.435114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.981 [2024-11-15 11:53:49.435142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.981 qpair failed and we were unable to recover it. 00:30:23.981 [2024-11-15 11:53:49.435493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.981 [2024-11-15 11:53:49.435521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.981 qpair failed and we were unable to recover it. 00:30:23.981 [2024-11-15 11:53:49.435894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.981 [2024-11-15 11:53:49.435924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.981 qpair failed and we were unable to recover it. 00:30:23.981 [2024-11-15 11:53:49.436288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.981 [2024-11-15 11:53:49.436316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.981 qpair failed and we were unable to recover it. 00:30:23.981 [2024-11-15 11:53:49.436696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.981 [2024-11-15 11:53:49.436726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.981 qpair failed and we were unable to recover it. 00:30:23.981 [2024-11-15 11:53:49.436966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.981 [2024-11-15 11:53:49.436997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.981 qpair failed and we were unable to recover it. 00:30:23.981 [2024-11-15 11:53:49.437383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.981 [2024-11-15 11:53:49.437412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.981 qpair failed and we were unable to recover it. 00:30:23.981 [2024-11-15 11:53:49.437742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.981 [2024-11-15 11:53:49.437772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.981 qpair failed and we were unable to recover it. 00:30:23.981 [2024-11-15 11:53:49.438154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.981 [2024-11-15 11:53:49.438183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.981 qpair failed and we were unable to recover it. 
00:30:23.981 [2024-11-15 11:53:49.438559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.981 [2024-11-15 11:53:49.438598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.981 qpair failed and we were unable to recover it. 00:30:23.981 [2024-11-15 11:53:49.438963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.981 [2024-11-15 11:53:49.438993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.981 qpair failed and we were unable to recover it. 00:30:23.981 [2024-11-15 11:53:49.439353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.981 [2024-11-15 11:53:49.439383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.981 qpair failed and we were unable to recover it. 00:30:23.981 [2024-11-15 11:53:49.439756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.981 [2024-11-15 11:53:49.439786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.981 qpair failed and we were unable to recover it. 00:30:23.981 [2024-11-15 11:53:49.440127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.981 [2024-11-15 11:53:49.440155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.981 qpair failed and we were unable to recover it. 00:30:23.981 [2024-11-15 11:53:49.440471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.981 [2024-11-15 11:53:49.440500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.981 qpair failed and we were unable to recover it. 00:30:23.981 [2024-11-15 11:53:49.440868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.981 [2024-11-15 11:53:49.440897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.981 qpair failed and we were unable to recover it. 00:30:23.981 [2024-11-15 11:53:49.441148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.981 [2024-11-15 11:53:49.441176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.981 qpair failed and we were unable to recover it. 00:30:23.981 [2024-11-15 11:53:49.441541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.981 [2024-11-15 11:53:49.441581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.981 qpair failed and we were unable to recover it. 00:30:23.981 [2024-11-15 11:53:49.441848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.981 [2024-11-15 11:53:49.441875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.981 qpair failed and we were unable to recover it. 
00:30:23.981 [2024-11-15 11:53:49.442229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.981 [2024-11-15 11:53:49.442257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.981 qpair failed and we were unable to recover it. 00:30:23.981 [2024-11-15 11:53:49.442625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.981 [2024-11-15 11:53:49.442656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.981 qpair failed and we were unable to recover it. 00:30:23.981 [2024-11-15 11:53:49.443007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.981 [2024-11-15 11:53:49.443037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.981 qpair failed and we were unable to recover it. 00:30:23.981 [2024-11-15 11:53:49.443379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.981 [2024-11-15 11:53:49.443407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.981 qpair failed and we were unable to recover it. 00:30:23.981 [2024-11-15 11:53:49.443767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.981 [2024-11-15 11:53:49.443796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.981 qpair failed and we were unable to recover it. 00:30:23.981 [2024-11-15 11:53:49.444139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.981 [2024-11-15 11:53:49.444167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.981 qpair failed and we were unable to recover it. 00:30:23.981 [2024-11-15 11:53:49.444402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.981 [2024-11-15 11:53:49.444434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.981 qpair failed and we were unable to recover it. 00:30:23.981 [2024-11-15 11:53:49.444706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.981 [2024-11-15 11:53:49.444738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.981 qpair failed and we were unable to recover it. 00:30:23.981 [2024-11-15 11:53:49.445095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.981 [2024-11-15 11:53:49.445124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.981 qpair failed and we were unable to recover it. 00:30:23.981 [2024-11-15 11:53:49.445506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.981 [2024-11-15 11:53:49.445535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:23.981 qpair failed and we were unable to recover it. 
00:30:24.253 [2024-11-15 11:53:49.445919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.253 [2024-11-15 11:53:49.445953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.253 qpair failed and we were unable to recover it. 00:30:24.253 [2024-11-15 11:53:49.446314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.253 [2024-11-15 11:53:49.446344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.253 qpair failed and we were unable to recover it. 00:30:24.253 [2024-11-15 11:53:49.446695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.253 [2024-11-15 11:53:49.446726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.253 qpair failed and we were unable to recover it. 00:30:24.253 [2024-11-15 11:53:49.447056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.253 [2024-11-15 11:53:49.447084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.253 qpair failed and we were unable to recover it. 00:30:24.253 [2024-11-15 11:53:49.447416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.253 [2024-11-15 11:53:49.447445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.253 qpair failed and we were unable to recover it. 00:30:24.253 [2024-11-15 11:53:49.447788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.253 [2024-11-15 11:53:49.447818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.253 qpair failed and we were unable to recover it. 00:30:24.253 [2024-11-15 11:53:49.448211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.253 [2024-11-15 11:53:49.448240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.253 qpair failed and we were unable to recover it. 00:30:24.253 [2024-11-15 11:53:49.448620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.253 [2024-11-15 11:53:49.448649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.253 qpair failed and we were unable to recover it. 00:30:24.253 [2024-11-15 11:53:49.448909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.253 [2024-11-15 11:53:49.448938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.253 qpair failed and we were unable to recover it. 00:30:24.253 [2024-11-15 11:53:49.449314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.253 [2024-11-15 11:53:49.449348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.253 qpair failed and we were unable to recover it. 
00:30:24.253 [2024-11-15 11:53:49.449708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.253 [2024-11-15 11:53:49.449739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.253 qpair failed and we were unable to recover it. 00:30:24.253 [2024-11-15 11:53:49.450058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.253 [2024-11-15 11:53:49.450086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.253 qpair failed and we were unable to recover it. 00:30:24.253 [2024-11-15 11:53:49.450455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.253 [2024-11-15 11:53:49.450484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.253 qpair failed and we were unable to recover it. 00:30:24.253 [2024-11-15 11:53:49.450835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.253 [2024-11-15 11:53:49.450864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.253 qpair failed and we were unable to recover it. 00:30:24.253 [2024-11-15 11:53:49.451225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.253 [2024-11-15 11:53:49.451253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.253 qpair failed and we were unable to recover it. 00:30:24.253 [2024-11-15 11:53:49.451596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.253 [2024-11-15 11:53:49.451626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.253 qpair failed and we were unable to recover it. 00:30:24.253 [2024-11-15 11:53:49.451988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.253 [2024-11-15 11:53:49.452016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.253 qpair failed and we were unable to recover it. 00:30:24.253 [2024-11-15 11:53:49.452380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.253 [2024-11-15 11:53:49.452407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.253 qpair failed and we were unable to recover it. 00:30:24.253 [2024-11-15 11:53:49.452766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.253 [2024-11-15 11:53:49.452797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.253 qpair failed and we were unable to recover it. 00:30:24.253 [2024-11-15 11:53:49.453119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.253 [2024-11-15 11:53:49.453147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.253 qpair failed and we were unable to recover it. 
00:30:24.253 [2024-11-15 11:53:49.453508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.253 [2024-11-15 11:53:49.453536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.253 qpair failed and we were unable to recover it. 00:30:24.253 [2024-11-15 11:53:49.453898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.253 [2024-11-15 11:53:49.453927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.253 qpair failed and we were unable to recover it. 00:30:24.253 [2024-11-15 11:53:49.454293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.253 [2024-11-15 11:53:49.454322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.254 qpair failed and we were unable to recover it. 00:30:24.254 [2024-11-15 11:53:49.454697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.254 [2024-11-15 11:53:49.454727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.254 qpair failed and we were unable to recover it. 00:30:24.254 [2024-11-15 11:53:49.455091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.254 [2024-11-15 11:53:49.455119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.254 qpair failed and we were unable to recover it. 00:30:24.254 [2024-11-15 11:53:49.455476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.254 [2024-11-15 11:53:49.455504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.254 qpair failed and we were unable to recover it. 00:30:24.254 [2024-11-15 11:53:49.455896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.254 [2024-11-15 11:53:49.455926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.254 qpair failed and we were unable to recover it. 00:30:24.254 [2024-11-15 11:53:49.456259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.254 [2024-11-15 11:53:49.456288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.254 qpair failed and we were unable to recover it. 00:30:24.254 [2024-11-15 11:53:49.456639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.254 [2024-11-15 11:53:49.456668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.254 qpair failed and we were unable to recover it. 00:30:24.254 [2024-11-15 11:53:49.457040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.254 [2024-11-15 11:53:49.457069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.254 qpair failed and we were unable to recover it. 
00:30:24.254 [2024-11-15 11:53:49.457462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.254 [2024-11-15 11:53:49.457491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.254 qpair failed and we were unable to recover it. 00:30:24.254 [2024-11-15 11:53:49.457850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.254 [2024-11-15 11:53:49.457880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.254 qpair failed and we were unable to recover it. 00:30:24.254 [2024-11-15 11:53:49.458124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.254 [2024-11-15 11:53:49.458151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.254 qpair failed and we were unable to recover it. 00:30:24.254 [2024-11-15 11:53:49.458492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.254 [2024-11-15 11:53:49.458520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.254 qpair failed and we were unable to recover it. 00:30:24.254 [2024-11-15 11:53:49.458881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.254 [2024-11-15 11:53:49.458910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.254 qpair failed and we were unable to recover it. 00:30:24.254 [2024-11-15 11:53:49.459261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.254 [2024-11-15 11:53:49.459289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.254 qpair failed and we were unable to recover it. 00:30:24.254 [2024-11-15 11:53:49.459642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.254 [2024-11-15 11:53:49.459678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.254 qpair failed and we were unable to recover it. 00:30:24.254 [2024-11-15 11:53:49.460055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.254 [2024-11-15 11:53:49.460085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.254 qpair failed and we were unable to recover it. 00:30:24.254 [2024-11-15 11:53:49.460478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.254 [2024-11-15 11:53:49.460507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.254 qpair failed and we were unable to recover it. 00:30:24.254 [2024-11-15 11:53:49.460759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.254 [2024-11-15 11:53:49.460788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.254 qpair failed and we were unable to recover it. 
00:30:24.254 [2024-11-15 11:53:49.461124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.254 [2024-11-15 11:53:49.461162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.254 qpair failed and we were unable to recover it. 00:30:24.254 [2024-11-15 11:53:49.461539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.254 [2024-11-15 11:53:49.461576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.254 qpair failed and we were unable to recover it. 00:30:24.254 [2024-11-15 11:53:49.461850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.254 [2024-11-15 11:53:49.461878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.254 qpair failed and we were unable to recover it. 00:30:24.254 [2024-11-15 11:53:49.462259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.254 [2024-11-15 11:53:49.462286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.254 qpair failed and we were unable to recover it. 00:30:24.254 [2024-11-15 11:53:49.462581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.254 [2024-11-15 11:53:49.462610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.254 qpair failed and we were unable to recover it. 00:30:24.254 [2024-11-15 11:53:49.462759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.254 [2024-11-15 11:53:49.462790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.254 qpair failed and we were unable to recover it. 00:30:24.254 [2024-11-15 11:53:49.463181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.254 [2024-11-15 11:53:49.463209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.254 qpair failed and we were unable to recover it. 00:30:24.254 [2024-11-15 11:53:49.463534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.254 [2024-11-15 11:53:49.463586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.254 qpair failed and we were unable to recover it. 00:30:24.254 [2024-11-15 11:53:49.463954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.254 [2024-11-15 11:53:49.463984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.254 qpair failed and we were unable to recover it. 00:30:24.254 [2024-11-15 11:53:49.464344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.254 [2024-11-15 11:53:49.464372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.254 qpair failed and we were unable to recover it. 
00:30:24.254 [2024-11-15 11:53:49.464739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.254 [2024-11-15 11:53:49.464770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.254 qpair failed and we were unable to recover it. 00:30:24.254 [2024-11-15 11:53:49.465115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.254 [2024-11-15 11:53:49.465144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.254 qpair failed and we were unable to recover it. 00:30:24.254 [2024-11-15 11:53:49.465515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.254 [2024-11-15 11:53:49.465543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.254 qpair failed and we were unable to recover it. 00:30:24.254 [2024-11-15 11:53:49.465915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.254 [2024-11-15 11:53:49.465944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.254 qpair failed and we were unable to recover it. 00:30:24.254 [2024-11-15 11:53:49.466306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.254 [2024-11-15 11:53:49.466336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.254 qpair failed and we were unable to recover it. 00:30:24.254 [2024-11-15 11:53:49.466681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.254 [2024-11-15 11:53:49.466710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.254 qpair failed and we were unable to recover it. 00:30:24.254 [2024-11-15 11:53:49.467073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.254 [2024-11-15 11:53:49.467102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.254 qpair failed and we were unable to recover it. 00:30:24.254 [2024-11-15 11:53:49.467434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.254 [2024-11-15 11:53:49.467462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.254 qpair failed and we were unable to recover it. 00:30:24.254 [2024-11-15 11:53:49.467775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.254 [2024-11-15 11:53:49.467805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.254 qpair failed and we were unable to recover it. 00:30:24.254 [2024-11-15 11:53:49.468177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.254 [2024-11-15 11:53:49.468205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.254 qpair failed and we were unable to recover it. 
00:30:24.254 [2024-11-15 11:53:49.468577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.254 [2024-11-15 11:53:49.468607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.254 qpair failed and we were unable to recover it. 00:30:24.254 [2024-11-15 11:53:49.468961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.255 [2024-11-15 11:53:49.468989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.255 qpair failed and we were unable to recover it. 00:30:24.255 [2024-11-15 11:53:49.469379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.255 [2024-11-15 11:53:49.469407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.255 qpair failed and we were unable to recover it. 00:30:24.255 [2024-11-15 11:53:49.469647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.255 [2024-11-15 11:53:49.469679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.255 qpair failed and we were unable to recover it. 00:30:24.255 [2024-11-15 11:53:49.470061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.255 [2024-11-15 11:53:49.470090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.255 qpair failed and we were unable to recover it. 00:30:24.255 [2024-11-15 11:53:49.470366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.255 [2024-11-15 11:53:49.470396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.255 qpair failed and we were unable to recover it. 00:30:24.255 [2024-11-15 11:53:49.470756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.255 [2024-11-15 11:53:49.470785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.255 qpair failed and we were unable to recover it. 00:30:24.255 [2024-11-15 11:53:49.471147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.255 [2024-11-15 11:53:49.471174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.255 qpair failed and we were unable to recover it. 00:30:24.255 [2024-11-15 11:53:49.471601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.255 [2024-11-15 11:53:49.471632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.255 qpair failed and we were unable to recover it. 00:30:24.255 [2024-11-15 11:53:49.471980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.255 [2024-11-15 11:53:49.472008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.255 qpair failed and we were unable to recover it. 
00:30:24.255 [2024-11-15 11:53:49.472208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.255 [2024-11-15 11:53:49.472237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.255 qpair failed and we were unable to recover it. 00:30:24.255 [2024-11-15 11:53:49.472626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.255 [2024-11-15 11:53:49.472655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.255 qpair failed and we were unable to recover it. 00:30:24.255 [2024-11-15 11:53:49.473020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.255 [2024-11-15 11:53:49.473048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.255 qpair failed and we were unable to recover it. 00:30:24.255 [2024-11-15 11:53:49.473408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.255 [2024-11-15 11:53:49.473435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.255 qpair failed and we were unable to recover it. 00:30:24.255 [2024-11-15 11:53:49.473785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.255 [2024-11-15 11:53:49.473815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.255 qpair failed and we were unable to recover it. 00:30:24.255 [2024-11-15 11:53:49.474155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.255 [2024-11-15 11:53:49.474183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.255 qpair failed and we were unable to recover it. 00:30:24.255 [2024-11-15 11:53:49.474543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.255 [2024-11-15 11:53:49.474582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.255 qpair failed and we were unable to recover it. 00:30:24.255 [2024-11-15 11:53:49.474946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.255 [2024-11-15 11:53:49.474979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.255 qpair failed and we were unable to recover it. 00:30:24.255 [2024-11-15 11:53:49.475333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.255 [2024-11-15 11:53:49.475362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.255 qpair failed and we were unable to recover it. 00:30:24.255 [2024-11-15 11:53:49.475729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.255 [2024-11-15 11:53:49.475759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.255 qpair failed and we were unable to recover it. 
00:30:24.255 [2024-11-15 11:53:49.476113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.255 [2024-11-15 11:53:49.476141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.255 qpair failed and we were unable to recover it. 00:30:24.255 [2024-11-15 11:53:49.476515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.255 [2024-11-15 11:53:49.476543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.255 qpair failed and we were unable to recover it. 00:30:24.255 [2024-11-15 11:53:49.476907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.255 [2024-11-15 11:53:49.476937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.255 qpair failed and we were unable to recover it. 00:30:24.255 [2024-11-15 11:53:49.477193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.255 [2024-11-15 11:53:49.477221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.255 qpair failed and we were unable to recover it. 00:30:24.255 [2024-11-15 11:53:49.477584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.255 [2024-11-15 11:53:49.477613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.255 qpair failed and we were unable to recover it. 00:30:24.255 [2024-11-15 11:53:49.478033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.255 [2024-11-15 11:53:49.478061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.255 qpair failed and we were unable to recover it. 00:30:24.255 [2024-11-15 11:53:49.478392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.255 [2024-11-15 11:53:49.478421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.255 qpair failed and we were unable to recover it. 00:30:24.255 [2024-11-15 11:53:49.478796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.255 [2024-11-15 11:53:49.478826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.255 qpair failed and we were unable to recover it. 00:30:24.255 [2024-11-15 11:53:49.479188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.255 [2024-11-15 11:53:49.479217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.255 qpair failed and we were unable to recover it. 00:30:24.255 [2024-11-15 11:53:49.479597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.255 [2024-11-15 11:53:49.479628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.255 qpair failed and we were unable to recover it. 
00:30:24.255 [2024-11-15 11:53:49.479985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.255 [2024-11-15 11:53:49.480013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.255 qpair failed and we were unable to recover it. 00:30:24.255 [2024-11-15 11:53:49.480380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.255 [2024-11-15 11:53:49.480409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.255 qpair failed and we were unable to recover it. 00:30:24.255 [2024-11-15 11:53:49.480675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.255 [2024-11-15 11:53:49.480704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.255 qpair failed and we were unable to recover it. 00:30:24.255 [2024-11-15 11:53:49.481069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.255 [2024-11-15 11:53:49.481098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.255 qpair failed and we were unable to recover it. 00:30:24.255 [2024-11-15 11:53:49.481459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.255 [2024-11-15 11:53:49.481487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.255 qpair failed and we were unable to recover it. 00:30:24.255 [2024-11-15 11:53:49.481825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.255 [2024-11-15 11:53:49.481855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.255 qpair failed and we were unable to recover it. 00:30:24.255 [2024-11-15 11:53:49.482222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.255 [2024-11-15 11:53:49.482251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.255 qpair failed and we were unable to recover it. 00:30:24.255 [2024-11-15 11:53:49.482605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.255 [2024-11-15 11:53:49.482633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.255 qpair failed and we were unable to recover it. 00:30:24.255 [2024-11-15 11:53:49.483027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.255 [2024-11-15 11:53:49.483055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.255 qpair failed and we were unable to recover it. 00:30:24.255 [2024-11-15 11:53:49.483421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.255 [2024-11-15 11:53:49.483450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.256 qpair failed and we were unable to recover it. 
00:30:24.256 [2024-11-15 11:53:49.483822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.256 [2024-11-15 11:53:49.483852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.256 qpair failed and we were unable to recover it. 00:30:24.256 [2024-11-15 11:53:49.484200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.256 [2024-11-15 11:53:49.484228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.256 qpair failed and we were unable to recover it. 00:30:24.256 [2024-11-15 11:53:49.484595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.256 [2024-11-15 11:53:49.484625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.256 qpair failed and we were unable to recover it. 00:30:24.256 [2024-11-15 11:53:49.484976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.256 [2024-11-15 11:53:49.485003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.256 qpair failed and we were unable to recover it. 00:30:24.256 [2024-11-15 11:53:49.485364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.256 [2024-11-15 11:53:49.485405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.256 qpair failed and we were unable to recover it. 00:30:24.256 [2024-11-15 11:53:49.485749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.256 [2024-11-15 11:53:49.485779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.256 qpair failed and we were unable to recover it. 00:30:24.256 [2024-11-15 11:53:49.486111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.256 [2024-11-15 11:53:49.486140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.256 qpair failed and we were unable to recover it. 00:30:24.256 [2024-11-15 11:53:49.486515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.256 [2024-11-15 11:53:49.486543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.256 qpair failed and we were unable to recover it. 00:30:24.256 [2024-11-15 11:53:49.486925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.256 [2024-11-15 11:53:49.486955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.256 qpair failed and we were unable to recover it. 00:30:24.256 [2024-11-15 11:53:49.487325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.256 [2024-11-15 11:53:49.487356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.256 qpair failed and we were unable to recover it. 
00:30:24.256 [2024-11-15 11:53:49.487706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.256 [2024-11-15 11:53:49.487736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.256 qpair failed and we were unable to recover it. 00:30:24.256 [2024-11-15 11:53:49.487996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.256 [2024-11-15 11:53:49.488024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.256 qpair failed and we were unable to recover it. 00:30:24.256 [2024-11-15 11:53:49.488379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.256 [2024-11-15 11:53:49.488407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.256 qpair failed and we were unable to recover it. 00:30:24.256 [2024-11-15 11:53:49.488783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.256 [2024-11-15 11:53:49.488813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.256 qpair failed and we were unable to recover it. 00:30:24.256 [2024-11-15 11:53:49.489206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.256 [2024-11-15 11:53:49.489235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.256 qpair failed and we were unable to recover it. 00:30:24.256 [2024-11-15 11:53:49.489606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.256 [2024-11-15 11:53:49.489636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.256 qpair failed and we were unable to recover it. 00:30:24.256 [2024-11-15 11:53:49.489988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.256 [2024-11-15 11:53:49.490015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.256 qpair failed and we were unable to recover it. 00:30:24.256 [2024-11-15 11:53:49.490370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.256 [2024-11-15 11:53:49.490398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.256 qpair failed and we were unable to recover it. 00:30:24.256 [2024-11-15 11:53:49.490765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.256 [2024-11-15 11:53:49.490796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.256 qpair failed and we were unable to recover it. 00:30:24.256 [2024-11-15 11:53:49.491125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.256 [2024-11-15 11:53:49.491153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.256 qpair failed and we were unable to recover it. 
00:30:24.256 [2024-11-15 11:53:49.491325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.256 [2024-11-15 11:53:49.491356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.256 qpair failed and we were unable to recover it. 00:30:24.256 [2024-11-15 11:53:49.491720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.256 [2024-11-15 11:53:49.491750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.256 qpair failed and we were unable to recover it. 00:30:24.256 [2024-11-15 11:53:49.492107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.256 [2024-11-15 11:53:49.492136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.256 qpair failed and we were unable to recover it. 00:30:24.256 [2024-11-15 11:53:49.492457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.256 [2024-11-15 11:53:49.492485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.256 qpair failed and we were unable to recover it. 00:30:24.256 [2024-11-15 11:53:49.492854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.256 [2024-11-15 11:53:49.492884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.256 qpair failed and we were unable to recover it. 00:30:24.256 [2024-11-15 11:53:49.493237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.256 [2024-11-15 11:53:49.493265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.256 qpair failed and we were unable to recover it. 00:30:24.256 [2024-11-15 11:53:49.493655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.256 [2024-11-15 11:53:49.493688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.256 qpair failed and we were unable to recover it. 00:30:24.256 [2024-11-15 11:53:49.494033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.256 [2024-11-15 11:53:49.494061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.256 qpair failed and we were unable to recover it. 00:30:24.256 [2024-11-15 11:53:49.494412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.256 [2024-11-15 11:53:49.494440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.256 qpair failed and we were unable to recover it. 00:30:24.256 [2024-11-15 11:53:49.494812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.256 [2024-11-15 11:53:49.494842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.256 qpair failed and we were unable to recover it. 
00:30:24.256 [2024-11-15 11:53:49.495204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.256 [2024-11-15 11:53:49.495231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.256 qpair failed and we were unable to recover it. 00:30:24.256 [2024-11-15 11:53:49.495597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.256 [2024-11-15 11:53:49.495627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.256 qpair failed and we were unable to recover it. 00:30:24.256 [2024-11-15 11:53:49.496008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.256 [2024-11-15 11:53:49.496037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.256 qpair failed and we were unable to recover it. 00:30:24.256 [2024-11-15 11:53:49.496388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.256 [2024-11-15 11:53:49.496415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.256 qpair failed and we were unable to recover it. 00:30:24.256 [2024-11-15 11:53:49.496760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.256 [2024-11-15 11:53:49.496789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.256 qpair failed and we were unable to recover it. 00:30:24.256 [2024-11-15 11:53:49.497161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.256 [2024-11-15 11:53:49.497190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.256 qpair failed and we were unable to recover it. 00:30:24.256 [2024-11-15 11:53:49.497544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.256 [2024-11-15 11:53:49.497596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.256 qpair failed and we were unable to recover it. 00:30:24.257 [2024-11-15 11:53:49.497945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.257 [2024-11-15 11:53:49.497974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.257 qpair failed and we were unable to recover it. 00:30:24.257 [2024-11-15 11:53:49.498335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.257 [2024-11-15 11:53:49.498365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.257 qpair failed and we were unable to recover it. 00:30:24.257 [2024-11-15 11:53:49.498686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.257 [2024-11-15 11:53:49.498715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.257 qpair failed and we were unable to recover it. 
00:30:24.257 [2024-11-15 11:53:49.499051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.257 [2024-11-15 11:53:49.499080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.257 qpair failed and we were unable to recover it. 00:30:24.257 [2024-11-15 11:53:49.499446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.257 [2024-11-15 11:53:49.499476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.257 qpair failed and we were unable to recover it. 00:30:24.257 [2024-11-15 11:53:49.499868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.257 [2024-11-15 11:53:49.499898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.257 qpair failed and we were unable to recover it. 00:30:24.257 [2024-11-15 11:53:49.500227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.257 [2024-11-15 11:53:49.500255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.257 qpair failed and we were unable to recover it. 00:30:24.257 [2024-11-15 11:53:49.500607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.257 [2024-11-15 11:53:49.500637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.257 qpair failed and we were unable to recover it. 00:30:24.257 [2024-11-15 11:53:49.500989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.257 [2024-11-15 11:53:49.501023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.257 qpair failed and we were unable to recover it. 00:30:24.257 [2024-11-15 11:53:49.501374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.257 [2024-11-15 11:53:49.501403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.257 qpair failed and we were unable to recover it. 00:30:24.257 [2024-11-15 11:53:49.501651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.257 [2024-11-15 11:53:49.501683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.257 qpair failed and we were unable to recover it. 00:30:24.257 [2024-11-15 11:53:49.502029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.257 [2024-11-15 11:53:49.502058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.257 qpair failed and we were unable to recover it. 00:30:24.257 [2024-11-15 11:53:49.502419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.257 [2024-11-15 11:53:49.502447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.257 qpair failed and we were unable to recover it. 
00:30:24.257 [2024-11-15 11:53:49.502812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.257 [2024-11-15 11:53:49.502842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.257 qpair failed and we were unable to recover it. 00:30:24.257 [2024-11-15 11:53:49.503156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.257 [2024-11-15 11:53:49.503185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.257 qpair failed and we were unable to recover it. 00:30:24.257 [2024-11-15 11:53:49.503527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.257 [2024-11-15 11:53:49.503555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.257 qpair failed and we were unable to recover it. 00:30:24.257 [2024-11-15 11:53:49.503921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.257 [2024-11-15 11:53:49.503951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.257 qpair failed and we were unable to recover it. 00:30:24.257 [2024-11-15 11:53:49.504345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.257 [2024-11-15 11:53:49.504374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.257 qpair failed and we were unable to recover it. 00:30:24.257 [2024-11-15 11:53:49.504748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.257 [2024-11-15 11:53:49.504778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.257 qpair failed and we were unable to recover it. 00:30:24.257 [2024-11-15 11:53:49.505104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.257 [2024-11-15 11:53:49.505132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.257 qpair failed and we were unable to recover it. 00:30:24.257 [2024-11-15 11:53:49.505493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.257 [2024-11-15 11:53:49.505523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.257 qpair failed and we were unable to recover it. 00:30:24.257 [2024-11-15 11:53:49.505863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.257 [2024-11-15 11:53:49.505895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.257 qpair failed and we were unable to recover it. 00:30:24.257 [2024-11-15 11:53:49.506267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.257 [2024-11-15 11:53:49.506296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.257 qpair failed and we were unable to recover it. 
00:30:24.257 [2024-11-15 11:53:49.506641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.257 [2024-11-15 11:53:49.506671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.257 qpair failed and we were unable to recover it. 00:30:24.257 [2024-11-15 11:53:49.507008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.257 [2024-11-15 11:53:49.507036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.257 qpair failed and we were unable to recover it. 00:30:24.257 [2024-11-15 11:53:49.507388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.257 [2024-11-15 11:53:49.507416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.257 qpair failed and we were unable to recover it. 00:30:24.257 [2024-11-15 11:53:49.507787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.257 [2024-11-15 11:53:49.507818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.257 qpair failed and we were unable to recover it. 00:30:24.257 [2024-11-15 11:53:49.508161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.257 [2024-11-15 11:53:49.508191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.257 qpair failed and we were unable to recover it. 00:30:24.257 [2024-11-15 11:53:49.508435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.257 [2024-11-15 11:53:49.508467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.257 qpair failed and we were unable to recover it. 00:30:24.257 [2024-11-15 11:53:49.508864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.257 [2024-11-15 11:53:49.508896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.257 qpair failed and we were unable to recover it. 00:30:24.257 [2024-11-15 11:53:49.509233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.257 [2024-11-15 11:53:49.509263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.257 qpair failed and we were unable to recover it. 00:30:24.257 [2024-11-15 11:53:49.509620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.257 [2024-11-15 11:53:49.509652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.257 qpair failed and we were unable to recover it. 00:30:24.257 [2024-11-15 11:53:49.510022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.257 [2024-11-15 11:53:49.510052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.257 qpair failed and we were unable to recover it. 
00:30:24.257 [2024-11-15 11:53:49.510418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.257 [2024-11-15 11:53:49.510446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.257 qpair failed and we were unable to recover it. 00:30:24.257 [2024-11-15 11:53:49.510807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.257 [2024-11-15 11:53:49.510841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.258 qpair failed and we were unable to recover it. 00:30:24.258 [2024-11-15 11:53:49.511198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.258 [2024-11-15 11:53:49.511229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.258 qpair failed and we were unable to recover it. 00:30:24.258 [2024-11-15 11:53:49.511475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.258 [2024-11-15 11:53:49.511505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.258 qpair failed and we were unable to recover it. 00:30:24.258 [2024-11-15 11:53:49.511902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.258 [2024-11-15 11:53:49.511933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.258 qpair failed and we were unable to recover it. 00:30:24.258 [2024-11-15 11:53:49.512313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.258 [2024-11-15 11:53:49.512344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.258 qpair failed and we were unable to recover it. 00:30:24.258 [2024-11-15 11:53:49.512730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.258 [2024-11-15 11:53:49.512761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.258 qpair failed and we were unable to recover it. 00:30:24.258 [2024-11-15 11:53:49.513124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.258 [2024-11-15 11:53:49.513155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.258 qpair failed and we were unable to recover it. 00:30:24.258 [2024-11-15 11:53:49.513573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.258 [2024-11-15 11:53:49.513604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.258 qpair failed and we were unable to recover it. 00:30:24.258 [2024-11-15 11:53:49.513833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.258 [2024-11-15 11:53:49.513866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.258 qpair failed and we were unable to recover it. 
00:30:24.258 [2024-11-15 11:53:49.514214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.258 [2024-11-15 11:53:49.514243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.258 qpair failed and we were unable to recover it. 00:30:24.258 [2024-11-15 11:53:49.514610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.258 [2024-11-15 11:53:49.514643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.258 qpair failed and we were unable to recover it. 00:30:24.258 [2024-11-15 11:53:49.515031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.258 [2024-11-15 11:53:49.515062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.258 qpair failed and we were unable to recover it. 00:30:24.258 [2024-11-15 11:53:49.515450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.258 [2024-11-15 11:53:49.515478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.258 qpair failed and we were unable to recover it. 00:30:24.258 [2024-11-15 11:53:49.515802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.258 [2024-11-15 11:53:49.515834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.258 qpair failed and we were unable to recover it. 00:30:24.258 [2024-11-15 11:53:49.516181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.258 [2024-11-15 11:53:49.516210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.258 qpair failed and we were unable to recover it. 00:30:24.258 [2024-11-15 11:53:49.516550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.258 [2024-11-15 11:53:49.516594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.258 qpair failed and we were unable to recover it. 00:30:24.258 [2024-11-15 11:53:49.516980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.258 [2024-11-15 11:53:49.517010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.258 qpair failed and we were unable to recover it. 00:30:24.258 [2024-11-15 11:53:49.517373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.258 [2024-11-15 11:53:49.517405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.258 qpair failed and we were unable to recover it. 00:30:24.258 [2024-11-15 11:53:49.517782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.258 [2024-11-15 11:53:49.517813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.258 qpair failed and we were unable to recover it. 
00:30:24.258 [2024-11-15 11:53:49.518159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.258 [2024-11-15 11:53:49.518188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.258 qpair failed and we were unable to recover it. 00:30:24.258 [2024-11-15 11:53:49.518584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.258 [2024-11-15 11:53:49.518614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.258 qpair failed and we were unable to recover it. 00:30:24.258 [2024-11-15 11:53:49.518946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.258 [2024-11-15 11:53:49.518977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.258 qpair failed and we were unable to recover it. 00:30:24.258 [2024-11-15 11:53:49.519340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.258 [2024-11-15 11:53:49.519369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.258 qpair failed and we were unable to recover it. 00:30:24.258 [2024-11-15 11:53:49.519721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.258 [2024-11-15 11:53:49.519751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.258 qpair failed and we were unable to recover it. 00:30:24.258 [2024-11-15 11:53:49.520147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.258 [2024-11-15 11:53:49.520176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.258 qpair failed and we were unable to recover it. 00:30:24.258 [2024-11-15 11:53:49.520502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.258 [2024-11-15 11:53:49.520529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.258 qpair failed and we were unable to recover it. 00:30:24.258 [2024-11-15 11:53:49.520901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.258 [2024-11-15 11:53:49.520934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.258 qpair failed and we were unable to recover it. 00:30:24.258 [2024-11-15 11:53:49.521288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.258 [2024-11-15 11:53:49.521319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.258 qpair failed and we were unable to recover it. 00:30:24.258 [2024-11-15 11:53:49.521715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.258 [2024-11-15 11:53:49.521745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.258 qpair failed and we were unable to recover it. 
00:30:24.258 [2024-11-15 11:53:49.522106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.258 [2024-11-15 11:53:49.522136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.258 qpair failed and we were unable to recover it. 00:30:24.258 [2024-11-15 11:53:49.522379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.258 [2024-11-15 11:53:49.522409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.258 qpair failed and we were unable to recover it. 00:30:24.258 [2024-11-15 11:53:49.522822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.258 [2024-11-15 11:53:49.522853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.258 qpair failed and we were unable to recover it. 00:30:24.258 [2024-11-15 11:53:49.523246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.258 [2024-11-15 11:53:49.523276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.258 qpair failed and we were unable to recover it. 00:30:24.258 [2024-11-15 11:53:49.523598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.258 [2024-11-15 11:53:49.523629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.258 qpair failed and we were unable to recover it. 00:30:24.258 [2024-11-15 11:53:49.523982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.258 [2024-11-15 11:53:49.524011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.258 qpair failed and we were unable to recover it. 00:30:24.258 [2024-11-15 11:53:49.524385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.258 [2024-11-15 11:53:49.524416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.258 qpair failed and we were unable to recover it. 00:30:24.258 [2024-11-15 11:53:49.524811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.258 [2024-11-15 11:53:49.524841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.258 qpair failed and we were unable to recover it. 00:30:24.258 [2024-11-15 11:53:49.525213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.259 [2024-11-15 11:53:49.525243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.259 qpair failed and we were unable to recover it. 00:30:24.259 [2024-11-15 11:53:49.525605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.259 [2024-11-15 11:53:49.525636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.259 qpair failed and we were unable to recover it. 
00:30:24.264 [2024-11-15 11:53:49.598186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.264 [2024-11-15 11:53:49.598214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.264 qpair failed and we were unable to recover it. 00:30:24.264 [2024-11-15 11:53:49.598581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.264 [2024-11-15 11:53:49.598611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.264 qpair failed and we were unable to recover it. 00:30:24.264 [2024-11-15 11:53:49.598963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.264 [2024-11-15 11:53:49.598992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.264 qpair failed and we were unable to recover it. 00:30:24.264 [2024-11-15 11:53:49.599357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.264 [2024-11-15 11:53:49.599385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.264 qpair failed and we were unable to recover it. 00:30:24.264 [2024-11-15 11:53:49.599758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.264 [2024-11-15 11:53:49.599787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.264 qpair failed and we were unable to recover it. 00:30:24.264 [2024-11-15 11:53:49.600110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.264 [2024-11-15 11:53:49.600138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.264 qpair failed and we were unable to recover it. 00:30:24.264 [2024-11-15 11:53:49.600499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.264 [2024-11-15 11:53:49.600527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.264 qpair failed and we were unable to recover it. 00:30:24.264 [2024-11-15 11:53:49.600906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.264 [2024-11-15 11:53:49.600936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.264 qpair failed and we were unable to recover it. 00:30:24.264 [2024-11-15 11:53:49.601259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.264 [2024-11-15 11:53:49.601288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.264 qpair failed and we were unable to recover it. 00:30:24.264 [2024-11-15 11:53:49.601640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.264 [2024-11-15 11:53:49.601671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.264 qpair failed and we were unable to recover it. 
00:30:24.264 [2024-11-15 11:53:49.602047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.264 [2024-11-15 11:53:49.602076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.264 qpair failed and we were unable to recover it. 00:30:24.264 [2024-11-15 11:53:49.602473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.264 [2024-11-15 11:53:49.602501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.264 qpair failed and we were unable to recover it. 00:30:24.264 [2024-11-15 11:53:49.602886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.264 [2024-11-15 11:53:49.602916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.264 qpair failed and we were unable to recover it. 00:30:24.264 [2024-11-15 11:53:49.603285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.264 [2024-11-15 11:53:49.603320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.264 qpair failed and we were unable to recover it. 00:30:24.264 [2024-11-15 11:53:49.603645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.264 [2024-11-15 11:53:49.603675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.264 qpair failed and we were unable to recover it. 00:30:24.264 [2024-11-15 11:53:49.604037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.264 [2024-11-15 11:53:49.604066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.264 qpair failed and we were unable to recover it. 00:30:24.264 [2024-11-15 11:53:49.604395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.264 [2024-11-15 11:53:49.604424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.264 qpair failed and we were unable to recover it. 00:30:24.264 [2024-11-15 11:53:49.604763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.264 [2024-11-15 11:53:49.604798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.264 qpair failed and we were unable to recover it. 00:30:24.264 [2024-11-15 11:53:49.605155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.264 [2024-11-15 11:53:49.605183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.264 qpair failed and we were unable to recover it. 00:30:24.264 [2024-11-15 11:53:49.605500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.264 [2024-11-15 11:53:49.605528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.264 qpair failed and we were unable to recover it. 
00:30:24.264 [2024-11-15 11:53:49.605892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.264 [2024-11-15 11:53:49.605923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.264 qpair failed and we were unable to recover it. 00:30:24.264 [2024-11-15 11:53:49.606265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.264 [2024-11-15 11:53:49.606301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.264 qpair failed and we were unable to recover it. 00:30:24.264 [2024-11-15 11:53:49.606670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.264 [2024-11-15 11:53:49.606699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.264 qpair failed and we were unable to recover it. 00:30:24.264 [2024-11-15 11:53:49.607067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.264 [2024-11-15 11:53:49.607095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.264 qpair failed and we were unable to recover it. 00:30:24.264 [2024-11-15 11:53:49.607418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.264 [2024-11-15 11:53:49.607445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.264 qpair failed and we were unable to recover it. 00:30:24.264 [2024-11-15 11:53:49.607822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.264 [2024-11-15 11:53:49.607851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.264 qpair failed and we were unable to recover it. 00:30:24.264 [2024-11-15 11:53:49.608220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.264 [2024-11-15 11:53:49.608249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.264 qpair failed and we were unable to recover it. 00:30:24.264 [2024-11-15 11:53:49.608616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.264 [2024-11-15 11:53:49.608645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.264 qpair failed and we were unable to recover it. 00:30:24.264 [2024-11-15 11:53:49.609012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.264 [2024-11-15 11:53:49.609040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.264 qpair failed and we were unable to recover it. 00:30:24.264 [2024-11-15 11:53:49.609290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.264 [2024-11-15 11:53:49.609318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.264 qpair failed and we were unable to recover it. 
00:30:24.264 [2024-11-15 11:53:49.609570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.264 [2024-11-15 11:53:49.609599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.264 qpair failed and we were unable to recover it. 00:30:24.264 [2024-11-15 11:53:49.609972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.264 [2024-11-15 11:53:49.610000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.264 qpair failed and we were unable to recover it. 00:30:24.264 [2024-11-15 11:53:49.610273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.264 [2024-11-15 11:53:49.610302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.265 qpair failed and we were unable to recover it. 00:30:24.265 [2024-11-15 11:53:49.610673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.265 [2024-11-15 11:53:49.610702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.265 qpair failed and we were unable to recover it. 00:30:24.265 [2024-11-15 11:53:49.611075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.265 [2024-11-15 11:53:49.611103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.265 qpair failed and we were unable to recover it. 00:30:24.265 [2024-11-15 11:53:49.611498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.265 [2024-11-15 11:53:49.611527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.265 qpair failed and we were unable to recover it. 00:30:24.265 [2024-11-15 11:53:49.611896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.265 [2024-11-15 11:53:49.611926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.265 qpair failed and we were unable to recover it. 00:30:24.265 [2024-11-15 11:53:49.612158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.265 [2024-11-15 11:53:49.612190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.265 qpair failed and we were unable to recover it. 00:30:24.265 [2024-11-15 11:53:49.612551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.265 [2024-11-15 11:53:49.612593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.265 qpair failed and we were unable to recover it. 00:30:24.265 [2024-11-15 11:53:49.612966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.265 [2024-11-15 11:53:49.612995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.265 qpair failed and we were unable to recover it. 
00:30:24.265 [2024-11-15 11:53:49.613230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.265 [2024-11-15 11:53:49.613257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.265 qpair failed and we were unable to recover it. 00:30:24.265 [2024-11-15 11:53:49.613615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.265 [2024-11-15 11:53:49.613659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.265 qpair failed and we were unable to recover it. 00:30:24.265 [2024-11-15 11:53:49.614010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.265 [2024-11-15 11:53:49.614039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.265 qpair failed and we were unable to recover it. 00:30:24.265 [2024-11-15 11:53:49.614404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.265 [2024-11-15 11:53:49.614433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.265 qpair failed and we were unable to recover it. 00:30:24.265 [2024-11-15 11:53:49.614768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.265 [2024-11-15 11:53:49.614797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.265 qpair failed and we were unable to recover it. 00:30:24.265 [2024-11-15 11:53:49.615153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.265 [2024-11-15 11:53:49.615182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.265 qpair failed and we were unable to recover it. 00:30:24.265 [2024-11-15 11:53:49.615540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.265 [2024-11-15 11:53:49.615577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.265 qpair failed and we were unable to recover it. 00:30:24.265 [2024-11-15 11:53:49.615934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.265 [2024-11-15 11:53:49.615961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.265 qpair failed and we were unable to recover it. 00:30:24.265 [2024-11-15 11:53:49.616334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.265 [2024-11-15 11:53:49.616362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.265 qpair failed and we were unable to recover it. 00:30:24.265 [2024-11-15 11:53:49.616723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.265 [2024-11-15 11:53:49.616752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.265 qpair failed and we were unable to recover it. 
00:30:24.265 [2024-11-15 11:53:49.617112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.265 [2024-11-15 11:53:49.617140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.265 qpair failed and we were unable to recover it. 00:30:24.265 [2024-11-15 11:53:49.617502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.265 [2024-11-15 11:53:49.617531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.265 qpair failed and we were unable to recover it. 00:30:24.265 [2024-11-15 11:53:49.617894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.265 [2024-11-15 11:53:49.617924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.265 qpair failed and we were unable to recover it. 00:30:24.265 [2024-11-15 11:53:49.618257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.265 [2024-11-15 11:53:49.618294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.265 qpair failed and we were unable to recover it. 00:30:24.265 [2024-11-15 11:53:49.618525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.265 [2024-11-15 11:53:49.618557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.265 qpair failed and we were unable to recover it. 00:30:24.265 [2024-11-15 11:53:49.618861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.265 [2024-11-15 11:53:49.618889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.265 qpair failed and we were unable to recover it. 00:30:24.265 [2024-11-15 11:53:49.619265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.265 [2024-11-15 11:53:49.619294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.265 qpair failed and we were unable to recover it. 00:30:24.265 [2024-11-15 11:53:49.619647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.265 [2024-11-15 11:53:49.619677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.265 qpair failed and we were unable to recover it. 00:30:24.265 [2024-11-15 11:53:49.620055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.265 [2024-11-15 11:53:49.620089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.265 qpair failed and we were unable to recover it. 00:30:24.265 [2024-11-15 11:53:49.620522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.265 [2024-11-15 11:53:49.620550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.265 qpair failed and we were unable to recover it. 
00:30:24.265 [2024-11-15 11:53:49.620906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.265 [2024-11-15 11:53:49.620935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.265 qpair failed and we were unable to recover it. 00:30:24.265 [2024-11-15 11:53:49.621298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.265 [2024-11-15 11:53:49.621325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.265 qpair failed and we were unable to recover it. 00:30:24.265 [2024-11-15 11:53:49.621683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.265 [2024-11-15 11:53:49.621712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.265 qpair failed and we were unable to recover it. 00:30:24.265 [2024-11-15 11:53:49.622110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.265 [2024-11-15 11:53:49.622139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.265 qpair failed and we were unable to recover it. 00:30:24.265 [2024-11-15 11:53:49.622377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.265 [2024-11-15 11:53:49.622405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.265 qpair failed and we were unable to recover it. 00:30:24.265 [2024-11-15 11:53:49.622647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.265 [2024-11-15 11:53:49.622675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.265 qpair failed and we were unable to recover it. 00:30:24.265 [2024-11-15 11:53:49.623035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.265 [2024-11-15 11:53:49.623064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.265 qpair failed and we were unable to recover it. 00:30:24.265 [2024-11-15 11:53:49.623420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.265 [2024-11-15 11:53:49.623449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.265 qpair failed and we were unable to recover it. 00:30:24.265 [2024-11-15 11:53:49.623803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.265 [2024-11-15 11:53:49.623834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.265 qpair failed and we were unable to recover it. 00:30:24.265 [2024-11-15 11:53:49.624193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.265 [2024-11-15 11:53:49.624221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.265 qpair failed and we were unable to recover it. 
00:30:24.265 [2024-11-15 11:53:49.624587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.265 [2024-11-15 11:53:49.624617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.265 qpair failed and we were unable to recover it. 00:30:24.265 [2024-11-15 11:53:49.625012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.266 [2024-11-15 11:53:49.625040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.266 qpair failed and we were unable to recover it. 00:30:24.266 [2024-11-15 11:53:49.625411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.266 [2024-11-15 11:53:49.625438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.266 qpair failed and we were unable to recover it. 00:30:24.266 [2024-11-15 11:53:49.625877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.266 [2024-11-15 11:53:49.625907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.266 qpair failed and we were unable to recover it. 00:30:24.266 [2024-11-15 11:53:49.626263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.266 [2024-11-15 11:53:49.626292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.266 qpair failed and we were unable to recover it. 00:30:24.266 [2024-11-15 11:53:49.626546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.266 [2024-11-15 11:53:49.626588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.266 qpair failed and we were unable to recover it. 00:30:24.266 [2024-11-15 11:53:49.626944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.266 [2024-11-15 11:53:49.626972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.266 qpair failed and we were unable to recover it. 00:30:24.266 [2024-11-15 11:53:49.627327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.266 [2024-11-15 11:53:49.627356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.266 qpair failed and we were unable to recover it. 00:30:24.266 [2024-11-15 11:53:49.627737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.266 [2024-11-15 11:53:49.627766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.266 qpair failed and we were unable to recover it. 00:30:24.266 [2024-11-15 11:53:49.628132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.266 [2024-11-15 11:53:49.628160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.266 qpair failed and we were unable to recover it. 
00:30:24.266 [2024-11-15 11:53:49.628529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.266 [2024-11-15 11:53:49.628557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.266 qpair failed and we were unable to recover it. 00:30:24.266 [2024-11-15 11:53:49.628898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.266 [2024-11-15 11:53:49.628927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.266 qpair failed and we were unable to recover it. 00:30:24.266 [2024-11-15 11:53:49.629281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.266 [2024-11-15 11:53:49.629309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.266 qpair failed and we were unable to recover it. 00:30:24.266 [2024-11-15 11:53:49.629667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.266 [2024-11-15 11:53:49.629697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.266 qpair failed and we were unable to recover it. 00:30:24.266 [2024-11-15 11:53:49.630065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.266 [2024-11-15 11:53:49.630093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.266 qpair failed and we were unable to recover it. 00:30:24.266 [2024-11-15 11:53:49.630451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.266 [2024-11-15 11:53:49.630480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.266 qpair failed and we were unable to recover it. 00:30:24.266 [2024-11-15 11:53:49.630835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.266 [2024-11-15 11:53:49.630866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.266 qpair failed and we were unable to recover it. 00:30:24.266 [2024-11-15 11:53:49.631264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.266 [2024-11-15 11:53:49.631292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.266 qpair failed and we were unable to recover it. 00:30:24.266 [2024-11-15 11:53:49.631655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.266 [2024-11-15 11:53:49.631687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.266 qpair failed and we were unable to recover it. 00:30:24.266 [2024-11-15 11:53:49.632035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.266 [2024-11-15 11:53:49.632063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.266 qpair failed and we were unable to recover it. 
00:30:24.266 [2024-11-15 11:53:49.632423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.266 [2024-11-15 11:53:49.632451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.266 qpair failed and we were unable to recover it. 00:30:24.266 [2024-11-15 11:53:49.632822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.266 [2024-11-15 11:53:49.632850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.266 qpair failed and we were unable to recover it. 00:30:24.266 [2024-11-15 11:53:49.633175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.266 [2024-11-15 11:53:49.633203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.266 qpair failed and we were unable to recover it. 00:30:24.266 [2024-11-15 11:53:49.633552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.266 [2024-11-15 11:53:49.633591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.266 qpair failed and we were unable to recover it. 00:30:24.266 [2024-11-15 11:53:49.633930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.266 [2024-11-15 11:53:49.633958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.266 qpair failed and we were unable to recover it. 00:30:24.266 [2024-11-15 11:53:49.634323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.266 [2024-11-15 11:53:49.634352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.266 qpair failed and we were unable to recover it. 00:30:24.266 [2024-11-15 11:53:49.634731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.266 [2024-11-15 11:53:49.634760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.266 qpair failed and we were unable to recover it. 00:30:24.266 [2024-11-15 11:53:49.635119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.266 [2024-11-15 11:53:49.635147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.266 qpair failed and we were unable to recover it. 00:30:24.266 [2024-11-15 11:53:49.635509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.266 [2024-11-15 11:53:49.635537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.266 qpair failed and we were unable to recover it. 00:30:24.266 [2024-11-15 11:53:49.635910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.266 [2024-11-15 11:53:49.635939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.266 qpair failed and we were unable to recover it. 
00:30:24.266 [2024-11-15 11:53:49.636187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.266 [2024-11-15 11:53:49.636215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.266 qpair failed and we were unable to recover it. 00:30:24.266 [2024-11-15 11:53:49.636582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.266 [2024-11-15 11:53:49.636613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.266 qpair failed and we were unable to recover it. 00:30:24.266 [2024-11-15 11:53:49.636973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.266 [2024-11-15 11:53:49.637001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.266 qpair failed and we were unable to recover it. 00:30:24.266 [2024-11-15 11:53:49.637376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.266 [2024-11-15 11:53:49.637403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.266 qpair failed and we were unable to recover it. 00:30:24.266 [2024-11-15 11:53:49.637769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.266 [2024-11-15 11:53:49.637799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.266 qpair failed and we were unable to recover it. 00:30:24.266 [2024-11-15 11:53:49.638140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.266 [2024-11-15 11:53:49.638168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.266 qpair failed and we were unable to recover it. 00:30:24.266 [2024-11-15 11:53:49.638533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.266 [2024-11-15 11:53:49.638571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.266 qpair failed and we were unable to recover it. 00:30:24.266 [2024-11-15 11:53:49.638931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.266 [2024-11-15 11:53:49.638960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.266 qpair failed and we were unable to recover it. 00:30:24.266 [2024-11-15 11:53:49.639290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.266 [2024-11-15 11:53:49.639318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.266 qpair failed and we were unable to recover it. 00:30:24.266 [2024-11-15 11:53:49.639713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.266 [2024-11-15 11:53:49.639743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.266 qpair failed and we were unable to recover it. 
00:30:24.266 [2024-11-15 11:53:49.640105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.267 [2024-11-15 11:53:49.640133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.267 qpair failed and we were unable to recover it. 00:30:24.267 [2024-11-15 11:53:49.640489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.267 [2024-11-15 11:53:49.640517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.267 qpair failed and we were unable to recover it. 00:30:24.267 [2024-11-15 11:53:49.640843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.267 [2024-11-15 11:53:49.640873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.267 qpair failed and we were unable to recover it. 00:30:24.267 [2024-11-15 11:53:49.641176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.267 [2024-11-15 11:53:49.641204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.267 qpair failed and we were unable to recover it. 00:30:24.267 [2024-11-15 11:53:49.641586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.267 [2024-11-15 11:53:49.641616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.267 qpair failed and we were unable to recover it. 00:30:24.267 [2024-11-15 11:53:49.641931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.267 [2024-11-15 11:53:49.641958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.267 qpair failed and we were unable to recover it. 00:30:24.267 [2024-11-15 11:53:49.642345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.267 [2024-11-15 11:53:49.642373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.267 qpair failed and we were unable to recover it. 00:30:24.267 [2024-11-15 11:53:49.642735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.267 [2024-11-15 11:53:49.642765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.267 qpair failed and we were unable to recover it. 00:30:24.267 [2024-11-15 11:53:49.643009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.267 [2024-11-15 11:53:49.643040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.267 qpair failed and we were unable to recover it. 00:30:24.267 [2024-11-15 11:53:49.643398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.267 [2024-11-15 11:53:49.643426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.267 qpair failed and we were unable to recover it. 
00:30:24.267 [2024-11-15 11:53:49.643754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.267 [2024-11-15 11:53:49.643784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.267 qpair failed and we were unable to recover it. 00:30:24.267 [2024-11-15 11:53:49.644171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.267 [2024-11-15 11:53:49.644199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.267 qpair failed and we were unable to recover it. 00:30:24.267 [2024-11-15 11:53:49.644570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.267 [2024-11-15 11:53:49.644599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.267 qpair failed and we were unable to recover it. 00:30:24.267 [2024-11-15 11:53:49.644947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.267 [2024-11-15 11:53:49.644975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.267 qpair failed and we were unable to recover it. 00:30:24.267 [2024-11-15 11:53:49.645341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.267 [2024-11-15 11:53:49.645369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.267 qpair failed and we were unable to recover it. 00:30:24.267 [2024-11-15 11:53:49.645748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.267 [2024-11-15 11:53:49.645778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.267 qpair failed and we were unable to recover it. 00:30:24.267 [2024-11-15 11:53:49.646207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.267 [2024-11-15 11:53:49.646242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.267 qpair failed and we were unable to recover it. 00:30:24.267 [2024-11-15 11:53:49.646587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.267 [2024-11-15 11:53:49.646616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.267 qpair failed and we were unable to recover it. 00:30:24.267 [2024-11-15 11:53:49.646947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.267 [2024-11-15 11:53:49.646977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.267 qpair failed and we were unable to recover it. 00:30:24.267 [2024-11-15 11:53:49.647320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.267 [2024-11-15 11:53:49.647349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.267 qpair failed and we were unable to recover it. 
00:30:24.267 [2024-11-15 11:53:49.647708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.267 [2024-11-15 11:53:49.647736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.267 qpair failed and we were unable to recover it. 00:30:24.267 [2024-11-15 11:53:49.648129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.267 [2024-11-15 11:53:49.648158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.267 qpair failed and we were unable to recover it. 00:30:24.267 [2024-11-15 11:53:49.648522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.267 [2024-11-15 11:53:49.648551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.267 qpair failed and we were unable to recover it. 00:30:24.267 [2024-11-15 11:53:49.648907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.267 [2024-11-15 11:53:49.648935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.267 qpair failed and we were unable to recover it. 00:30:24.267 [2024-11-15 11:53:49.649302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.267 [2024-11-15 11:53:49.649331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.267 qpair failed and we were unable to recover it. 00:30:24.267 [2024-11-15 11:53:49.649724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.267 [2024-11-15 11:53:49.649754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.267 qpair failed and we were unable to recover it. 00:30:24.267 [2024-11-15 11:53:49.650097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.267 [2024-11-15 11:53:49.650125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.267 qpair failed and we were unable to recover it. 00:30:24.267 [2024-11-15 11:53:49.650454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.267 [2024-11-15 11:53:49.650482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.267 qpair failed and we were unable to recover it. 00:30:24.267 [2024-11-15 11:53:49.650827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.267 [2024-11-15 11:53:49.650858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.267 qpair failed and we were unable to recover it. 00:30:24.267 [2024-11-15 11:53:49.651212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.267 [2024-11-15 11:53:49.651240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.267 qpair failed and we were unable to recover it. 
00:30:24.267 [2024-11-15 11:53:49.651605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.267 [2024-11-15 11:53:49.651634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.267 qpair failed and we were unable to recover it.
00:30:24.267 [... the same three-message error sequence (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats continuously, identical except for timestamps, through 2024-11-15 11:53:49.731640 ...]
00:30:24.273 [2024-11-15 11:53:49.731611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.273 [2024-11-15 11:53:49.731640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.273 qpair failed and we were unable to recover it.
00:30:24.273 [2024-11-15 11:53:49.732003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.273 [2024-11-15 11:53:49.732031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.273 qpair failed and we were unable to recover it. 00:30:24.273 [2024-11-15 11:53:49.732407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.273 [2024-11-15 11:53:49.732435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.273 qpair failed and we were unable to recover it. 00:30:24.273 [2024-11-15 11:53:49.732771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.273 [2024-11-15 11:53:49.732801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.273 qpair failed and we were unable to recover it. 00:30:24.273 [2024-11-15 11:53:49.733164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.273 [2024-11-15 11:53:49.733192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.273 qpair failed and we were unable to recover it. 00:30:24.273 [2024-11-15 11:53:49.733553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.273 [2024-11-15 11:53:49.733592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.273 qpair failed and we were unable to recover it. 00:30:24.273 [2024-11-15 11:53:49.733949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.273 [2024-11-15 11:53:49.733978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.273 qpair failed and we were unable to recover it. 00:30:24.273 [2024-11-15 11:53:49.734339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.273 [2024-11-15 11:53:49.734367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.273 qpair failed and we were unable to recover it. 00:30:24.273 [2024-11-15 11:53:49.734712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.273 [2024-11-15 11:53:49.734742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.273 qpair failed and we were unable to recover it. 00:30:24.273 [2024-11-15 11:53:49.735098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.273 [2024-11-15 11:53:49.735126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.273 qpair failed and we were unable to recover it. 00:30:24.273 [2024-11-15 11:53:49.735442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.273 [2024-11-15 11:53:49.735470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.273 qpair failed and we were unable to recover it. 
00:30:24.273 [2024-11-15 11:53:49.735815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.273 [2024-11-15 11:53:49.735845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.273 qpair failed and we were unable to recover it. 00:30:24.273 [2024-11-15 11:53:49.736206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.273 [2024-11-15 11:53:49.736234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.273 qpair failed and we were unable to recover it. 00:30:24.273 [2024-11-15 11:53:49.736598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.273 [2024-11-15 11:53:49.736628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.273 qpair failed and we were unable to recover it. 00:30:24.273 [2024-11-15 11:53:49.736955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.273 [2024-11-15 11:53:49.736983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.273 qpair failed and we were unable to recover it. 00:30:24.273 [2024-11-15 11:53:49.737362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.273 [2024-11-15 11:53:49.737390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.273 qpair failed and we were unable to recover it. 00:30:24.273 [2024-11-15 11:53:49.737747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.273 [2024-11-15 11:53:49.737776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.273 qpair failed and we were unable to recover it. 00:30:24.273 [2024-11-15 11:53:49.738139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.548 [2024-11-15 11:53:49.738168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.548 qpair failed and we were unable to recover it. 00:30:24.548 [2024-11-15 11:53:49.738583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.548 [2024-11-15 11:53:49.738615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.548 qpair failed and we were unable to recover it. 00:30:24.548 [2024-11-15 11:53:49.738972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.548 [2024-11-15 11:53:49.739000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.548 qpair failed and we were unable to recover it. 00:30:24.548 [2024-11-15 11:53:49.739370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.548 [2024-11-15 11:53:49.739399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.548 qpair failed and we were unable to recover it. 
00:30:24.548 [2024-11-15 11:53:49.739771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.548 [2024-11-15 11:53:49.739801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.548 qpair failed and we were unable to recover it. 00:30:24.548 [2024-11-15 11:53:49.740172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.548 [2024-11-15 11:53:49.740206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.548 qpair failed and we were unable to recover it. 00:30:24.548 [2024-11-15 11:53:49.740585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.548 [2024-11-15 11:53:49.740615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.548 qpair failed and we were unable to recover it. 00:30:24.548 [2024-11-15 11:53:49.740962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.548 [2024-11-15 11:53:49.740993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.548 qpair failed and we were unable to recover it. 00:30:24.548 [2024-11-15 11:53:49.741246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.548 [2024-11-15 11:53:49.741277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.548 qpair failed and we were unable to recover it. 00:30:24.548 [2024-11-15 11:53:49.741635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.548 [2024-11-15 11:53:49.741666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.548 qpair failed and we were unable to recover it. 00:30:24.548 [2024-11-15 11:53:49.742049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.548 [2024-11-15 11:53:49.742077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.548 qpair failed and we were unable to recover it. 00:30:24.548 [2024-11-15 11:53:49.742424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.548 [2024-11-15 11:53:49.742452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.548 qpair failed and we were unable to recover it. 00:30:24.548 [2024-11-15 11:53:49.742817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.549 [2024-11-15 11:53:49.742846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.549 qpair failed and we were unable to recover it. 00:30:24.549 [2024-11-15 11:53:49.743203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.549 [2024-11-15 11:53:49.743232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.549 qpair failed and we were unable to recover it. 
00:30:24.549 [2024-11-15 11:53:49.743465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.549 [2024-11-15 11:53:49.743494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.549 qpair failed and we were unable to recover it. 00:30:24.549 [2024-11-15 11:53:49.743719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.549 [2024-11-15 11:53:49.743751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.549 qpair failed and we were unable to recover it. 00:30:24.549 [2024-11-15 11:53:49.744106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.549 [2024-11-15 11:53:49.744134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.549 qpair failed and we were unable to recover it. 00:30:24.549 [2024-11-15 11:53:49.744467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.549 [2024-11-15 11:53:49.744496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.549 qpair failed and we were unable to recover it. 00:30:24.549 [2024-11-15 11:53:49.744868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.549 [2024-11-15 11:53:49.744898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.549 qpair failed and we were unable to recover it. 00:30:24.549 [2024-11-15 11:53:49.745254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.549 [2024-11-15 11:53:49.745282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.549 qpair failed and we were unable to recover it. 00:30:24.549 [2024-11-15 11:53:49.745517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.549 [2024-11-15 11:53:49.745549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.549 qpair failed and we were unable to recover it. 00:30:24.549 [2024-11-15 11:53:49.745801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.549 [2024-11-15 11:53:49.745830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.549 qpair failed and we were unable to recover it. 00:30:24.549 [2024-11-15 11:53:49.746202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.549 [2024-11-15 11:53:49.746231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.549 qpair failed and we were unable to recover it. 00:30:24.549 [2024-11-15 11:53:49.746586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.549 [2024-11-15 11:53:49.746617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.549 qpair failed and we were unable to recover it. 
00:30:24.549 [2024-11-15 11:53:49.747023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.549 [2024-11-15 11:53:49.747051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.549 qpair failed and we were unable to recover it. 00:30:24.549 [2024-11-15 11:53:49.747411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.549 [2024-11-15 11:53:49.747440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.549 qpair failed and we were unable to recover it. 00:30:24.549 [2024-11-15 11:53:49.747773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.549 [2024-11-15 11:53:49.747803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.549 qpair failed and we were unable to recover it. 00:30:24.549 [2024-11-15 11:53:49.748157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.549 [2024-11-15 11:53:49.748186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.549 qpair failed and we were unable to recover it. 00:30:24.549 [2024-11-15 11:53:49.748554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.549 [2024-11-15 11:53:49.748593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.549 qpair failed and we were unable to recover it. 00:30:24.549 [2024-11-15 11:53:49.748968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.549 [2024-11-15 11:53:49.748996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.549 qpair failed and we were unable to recover it. 00:30:24.549 [2024-11-15 11:53:49.749321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.549 [2024-11-15 11:53:49.749350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.549 qpair failed and we were unable to recover it. 00:30:24.549 [2024-11-15 11:53:49.749696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.549 [2024-11-15 11:53:49.749726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.549 qpair failed and we were unable to recover it. 00:30:24.549 [2024-11-15 11:53:49.750092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.549 [2024-11-15 11:53:49.750127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.549 qpair failed and we were unable to recover it. 00:30:24.549 [2024-11-15 11:53:49.750469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.549 [2024-11-15 11:53:49.750497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.549 qpair failed and we were unable to recover it. 
00:30:24.549 [2024-11-15 11:53:49.750853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.549 [2024-11-15 11:53:49.750883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.549 qpair failed and we were unable to recover it. 00:30:24.549 [2024-11-15 11:53:49.751242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.549 [2024-11-15 11:53:49.751270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.549 qpair failed and we were unable to recover it. 00:30:24.549 [2024-11-15 11:53:49.751636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.549 [2024-11-15 11:53:49.751665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.549 qpair failed and we were unable to recover it. 00:30:24.549 [2024-11-15 11:53:49.752030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.549 [2024-11-15 11:53:49.752059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.549 qpair failed and we were unable to recover it. 00:30:24.549 [2024-11-15 11:53:49.752430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.549 [2024-11-15 11:53:49.752459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.549 qpair failed and we were unable to recover it. 00:30:24.549 [2024-11-15 11:53:49.752815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.549 [2024-11-15 11:53:49.752845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.549 qpair failed and we were unable to recover it. 00:30:24.549 [2024-11-15 11:53:49.753196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.549 [2024-11-15 11:53:49.753224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.549 qpair failed and we were unable to recover it. 00:30:24.549 [2024-11-15 11:53:49.753614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.549 [2024-11-15 11:53:49.753645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.549 qpair failed and we were unable to recover it. 00:30:24.549 [2024-11-15 11:53:49.753860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.549 [2024-11-15 11:53:49.753888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.549 qpair failed and we were unable to recover it. 00:30:24.549 [2024-11-15 11:53:49.754286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.549 [2024-11-15 11:53:49.754314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.549 qpair failed and we were unable to recover it. 
00:30:24.549 [2024-11-15 11:53:49.754690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.549 [2024-11-15 11:53:49.754720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.549 qpair failed and we were unable to recover it. 00:30:24.549 [2024-11-15 11:53:49.755113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.549 [2024-11-15 11:53:49.755142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.549 qpair failed and we were unable to recover it. 00:30:24.549 [2024-11-15 11:53:49.755485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.549 [2024-11-15 11:53:49.755515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.549 qpair failed and we were unable to recover it. 00:30:24.549 [2024-11-15 11:53:49.755913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.549 [2024-11-15 11:53:49.755943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.549 qpair failed and we were unable to recover it. 00:30:24.549 [2024-11-15 11:53:49.756298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.549 [2024-11-15 11:53:49.756328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.549 qpair failed and we were unable to recover it. 00:30:24.550 [2024-11-15 11:53:49.756656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.550 [2024-11-15 11:53:49.756686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.550 qpair failed and we were unable to recover it. 00:30:24.550 [2024-11-15 11:53:49.757023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.550 [2024-11-15 11:53:49.757051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.550 qpair failed and we were unable to recover it. 00:30:24.550 [2024-11-15 11:53:49.757389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.550 [2024-11-15 11:53:49.757417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.550 qpair failed and we were unable to recover it. 00:30:24.550 [2024-11-15 11:53:49.757770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.550 [2024-11-15 11:53:49.757802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.550 qpair failed and we were unable to recover it. 00:30:24.550 [2024-11-15 11:53:49.758147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.550 [2024-11-15 11:53:49.758178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.550 qpair failed and we were unable to recover it. 
00:30:24.550 [2024-11-15 11:53:49.758539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.550 [2024-11-15 11:53:49.758581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.550 qpair failed and we were unable to recover it. 00:30:24.550 [2024-11-15 11:53:49.758932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.550 [2024-11-15 11:53:49.758962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.550 qpair failed and we were unable to recover it. 00:30:24.550 [2024-11-15 11:53:49.759323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.550 [2024-11-15 11:53:49.759353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.550 qpair failed and we were unable to recover it. 00:30:24.550 [2024-11-15 11:53:49.759743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.550 [2024-11-15 11:53:49.759774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.550 qpair failed and we were unable to recover it. 00:30:24.550 [2024-11-15 11:53:49.760076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.550 [2024-11-15 11:53:49.760104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.550 qpair failed and we were unable to recover it. 00:30:24.550 [2024-11-15 11:53:49.760477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.550 [2024-11-15 11:53:49.760507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.550 qpair failed and we were unable to recover it. 00:30:24.550 [2024-11-15 11:53:49.760876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.550 [2024-11-15 11:53:49.760906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.550 qpair failed and we were unable to recover it. 00:30:24.550 [2024-11-15 11:53:49.761272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.550 [2024-11-15 11:53:49.761301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.550 qpair failed and we were unable to recover it. 00:30:24.550 [2024-11-15 11:53:49.761557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.550 [2024-11-15 11:53:49.761602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.550 qpair failed and we were unable to recover it. 00:30:24.550 [2024-11-15 11:53:49.761964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.550 [2024-11-15 11:53:49.761993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.550 qpair failed and we were unable to recover it. 
00:30:24.550 [2024-11-15 11:53:49.762236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.550 [2024-11-15 11:53:49.762264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.550 qpair failed and we were unable to recover it. 00:30:24.550 [2024-11-15 11:53:49.762631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.550 [2024-11-15 11:53:49.762660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.550 qpair failed and we were unable to recover it. 00:30:24.550 [2024-11-15 11:53:49.763045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.550 [2024-11-15 11:53:49.763073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.550 qpair failed and we were unable to recover it. 00:30:24.550 [2024-11-15 11:53:49.763396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.550 [2024-11-15 11:53:49.763425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.550 qpair failed and we were unable to recover it. 00:30:24.550 [2024-11-15 11:53:49.763777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.550 [2024-11-15 11:53:49.763806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.550 qpair failed and we were unable to recover it. 00:30:24.550 [2024-11-15 11:53:49.764175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.550 [2024-11-15 11:53:49.764204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.550 qpair failed and we were unable to recover it. 00:30:24.550 [2024-11-15 11:53:49.764543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.550 [2024-11-15 11:53:49.764584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.550 qpair failed and we were unable to recover it. 00:30:24.550 [2024-11-15 11:53:49.764938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.550 [2024-11-15 11:53:49.764968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.550 qpair failed and we were unable to recover it. 00:30:24.550 [2024-11-15 11:53:49.765333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.550 [2024-11-15 11:53:49.765363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.550 qpair failed and we were unable to recover it. 00:30:24.550 [2024-11-15 11:53:49.765726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.550 [2024-11-15 11:53:49.765762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.550 qpair failed and we were unable to recover it. 
00:30:24.550 [2024-11-15 11:53:49.766097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.550 [2024-11-15 11:53:49.766126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.550 qpair failed and we were unable to recover it. 00:30:24.550 [2024-11-15 11:53:49.766497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.550 [2024-11-15 11:53:49.766526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.550 qpair failed and we were unable to recover it. 00:30:24.550 [2024-11-15 11:53:49.766868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.550 [2024-11-15 11:53:49.766899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.550 qpair failed and we were unable to recover it. 00:30:24.550 [2024-11-15 11:53:49.767184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.550 [2024-11-15 11:53:49.767212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.550 qpair failed and we were unable to recover it. 00:30:24.550 [2024-11-15 11:53:49.767592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.550 [2024-11-15 11:53:49.767623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.550 qpair failed and we were unable to recover it. 00:30:24.550 [2024-11-15 11:53:49.767976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.550 [2024-11-15 11:53:49.768004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.550 qpair failed and we were unable to recover it. 00:30:24.550 [2024-11-15 11:53:49.768377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.550 [2024-11-15 11:53:49.768408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.550 qpair failed and we were unable to recover it. 00:30:24.550 [2024-11-15 11:53:49.768807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.550 [2024-11-15 11:53:49.768836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.550 qpair failed and we were unable to recover it. 00:30:24.550 [2024-11-15 11:53:49.769201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.550 [2024-11-15 11:53:49.769231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.550 qpair failed and we were unable to recover it. 00:30:24.550 [2024-11-15 11:53:49.769591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.550 [2024-11-15 11:53:49.769623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.550 qpair failed and we were unable to recover it. 
00:30:24.550 [2024-11-15 11:53:49.769980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.550 [2024-11-15 11:53:49.770008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.550 qpair failed and we were unable to recover it. 00:30:24.550 [2024-11-15 11:53:49.770368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.550 [2024-11-15 11:53:49.770396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.550 qpair failed and we were unable to recover it. 00:30:24.551 [2024-11-15 11:53:49.770721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.551 [2024-11-15 11:53:49.770751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.551 qpair failed and we were unable to recover it. 00:30:24.551 [2024-11-15 11:53:49.771105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.551 [2024-11-15 11:53:49.771134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.551 qpair failed and we were unable to recover it. 00:30:24.551 [2024-11-15 11:53:49.771502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.551 [2024-11-15 11:53:49.771532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.551 qpair failed and we were unable to recover it. 00:30:24.551 [2024-11-15 11:53:49.771942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.551 [2024-11-15 11:53:49.771972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.551 qpair failed and we were unable to recover it. 00:30:24.551 [2024-11-15 11:53:49.772233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.551 [2024-11-15 11:53:49.772264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.551 qpair failed and we were unable to recover it. 00:30:24.551 [2024-11-15 11:53:49.772635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.551 [2024-11-15 11:53:49.772666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.551 qpair failed and we were unable to recover it. 00:30:24.551 [2024-11-15 11:53:49.773005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.551 [2024-11-15 11:53:49.773035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.551 qpair failed and we were unable to recover it. 00:30:24.551 [2024-11-15 11:53:49.773409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.551 [2024-11-15 11:53:49.773438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.551 qpair failed and we were unable to recover it. 
00:30:24.551 [2024-11-15 11:53:49.773770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.551 [2024-11-15 11:53:49.773801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.551 qpair failed and we were unable to recover it. 00:30:24.551 [2024-11-15 11:53:49.774156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.551 [2024-11-15 11:53:49.774185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.551 qpair failed and we were unable to recover it. 00:30:24.551 [2024-11-15 11:53:49.774556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.551 [2024-11-15 11:53:49.774600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.551 qpair failed and we were unable to recover it. 00:30:24.551 [2024-11-15 11:53:49.774974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.551 [2024-11-15 11:53:49.775004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.551 qpair failed and we were unable to recover it. 00:30:24.551 [2024-11-15 11:53:49.775223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.551 [2024-11-15 11:53:49.775255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.551 qpair failed and we were unable to recover it. 00:30:24.551 [2024-11-15 11:53:49.775605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.551 [2024-11-15 11:53:49.775637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.551 qpair failed and we were unable to recover it. 00:30:24.551 [2024-11-15 11:53:49.776022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.551 [2024-11-15 11:53:49.776052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.551 qpair failed and we were unable to recover it. 00:30:24.551 [2024-11-15 11:53:49.776404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.551 [2024-11-15 11:53:49.776432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.551 qpair failed and we were unable to recover it. 00:30:24.551 [2024-11-15 11:53:49.776772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.551 [2024-11-15 11:53:49.776802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.551 qpair failed and we were unable to recover it. 00:30:24.551 [2024-11-15 11:53:49.777158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.551 [2024-11-15 11:53:49.777187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.551 qpair failed and we were unable to recover it. 
00:30:24.551 [2024-11-15 11:53:49.777543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.551 [2024-11-15 11:53:49.777584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.551 qpair failed and we were unable to recover it. 00:30:24.551 [2024-11-15 11:53:49.777902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.551 [2024-11-15 11:53:49.777930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.551 qpair failed and we were unable to recover it. 00:30:24.551 [2024-11-15 11:53:49.778305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.551 [2024-11-15 11:53:49.778335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.551 qpair failed and we were unable to recover it. 00:30:24.551 [2024-11-15 11:53:49.778694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.551 [2024-11-15 11:53:49.778725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.551 qpair failed and we were unable to recover it. 00:30:24.551 [2024-11-15 11:53:49.779095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.551 [2024-11-15 11:53:49.779124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.551 qpair failed and we were unable to recover it. 00:30:24.551 [2024-11-15 11:53:49.779369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.551 [2024-11-15 11:53:49.779398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.551 qpair failed and we were unable to recover it. 00:30:24.551 [2024-11-15 11:53:49.779773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.551 [2024-11-15 11:53:49.779804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.551 qpair failed and we were unable to recover it. 00:30:24.551 [2024-11-15 11:53:49.780158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.551 [2024-11-15 11:53:49.780188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.551 qpair failed and we were unable to recover it. 00:30:24.551 [2024-11-15 11:53:49.780552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.551 [2024-11-15 11:53:49.780594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.551 qpair failed and we were unable to recover it. 00:30:24.551 [2024-11-15 11:53:49.781002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.551 [2024-11-15 11:53:49.781030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.551 qpair failed and we were unable to recover it. 
00:30:24.551 [2024-11-15 11:53:49.781347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.551 [2024-11-15 11:53:49.781377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.551 qpair failed and we were unable to recover it. 00:30:24.551 [2024-11-15 11:53:49.781731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.551 [2024-11-15 11:53:49.781764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.551 qpair failed and we were unable to recover it. 00:30:24.551 [2024-11-15 11:53:49.782025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.551 [2024-11-15 11:53:49.782053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.551 qpair failed and we were unable to recover it. 00:30:24.551 [2024-11-15 11:53:49.782410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.551 [2024-11-15 11:53:49.782439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.551 qpair failed and we were unable to recover it. 00:30:24.551 [2024-11-15 11:53:49.782773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.551 [2024-11-15 11:53:49.782806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.551 qpair failed and we were unable to recover it. 00:30:24.551 [2024-11-15 11:53:49.783140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.551 [2024-11-15 11:53:49.783168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.551 qpair failed and we were unable to recover it. 00:30:24.551 [2024-11-15 11:53:49.783543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.551 [2024-11-15 11:53:49.783582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.551 qpair failed and we were unable to recover it. 00:30:24.551 [2024-11-15 11:53:49.783945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.551 [2024-11-15 11:53:49.783974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.551 qpair failed and we were unable to recover it. 00:30:24.551 [2024-11-15 11:53:49.784230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.551 [2024-11-15 11:53:49.784262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.551 qpair failed and we were unable to recover it. 00:30:24.551 [2024-11-15 11:53:49.784628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.551 [2024-11-15 11:53:49.784658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.551 qpair failed and we were unable to recover it. 
00:30:24.557 [2024-11-15 11:53:49.858178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.557 [2024-11-15 11:53:49.858206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.557 qpair failed and we were unable to recover it. 00:30:24.557 [2024-11-15 11:53:49.858574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.557 [2024-11-15 11:53:49.858604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.557 qpair failed and we were unable to recover it. 00:30:24.557 [2024-11-15 11:53:49.858913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.557 [2024-11-15 11:53:49.858940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.557 qpair failed and we were unable to recover it. 00:30:24.557 [2024-11-15 11:53:49.859300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.557 [2024-11-15 11:53:49.859330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.557 qpair failed and we were unable to recover it. 00:30:24.557 [2024-11-15 11:53:49.859702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.557 [2024-11-15 11:53:49.859737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.557 qpair failed and we were unable to recover it. 00:30:24.557 [2024-11-15 11:53:49.860107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.557 [2024-11-15 11:53:49.860136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.557 qpair failed and we were unable to recover it. 00:30:24.557 [2024-11-15 11:53:49.860486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.557 [2024-11-15 11:53:49.860514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.557 qpair failed and we were unable to recover it. 00:30:24.557 [2024-11-15 11:53:49.860887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.557 [2024-11-15 11:53:49.860917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.557 qpair failed and we were unable to recover it. 00:30:24.557 [2024-11-15 11:53:49.861291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.557 [2024-11-15 11:53:49.861320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.557 qpair failed and we were unable to recover it. 00:30:24.557 [2024-11-15 11:53:49.861662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.557 [2024-11-15 11:53:49.861691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.557 qpair failed and we were unable to recover it. 
00:30:24.557 [2024-11-15 11:53:49.862050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.557 [2024-11-15 11:53:49.862078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.557 qpair failed and we were unable to recover it. 00:30:24.557 [2024-11-15 11:53:49.862397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.557 [2024-11-15 11:53:49.862426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.557 qpair failed and we were unable to recover it. 00:30:24.557 [2024-11-15 11:53:49.862786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.557 [2024-11-15 11:53:49.862814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.557 qpair failed and we were unable to recover it. 00:30:24.557 [2024-11-15 11:53:49.863179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.557 [2024-11-15 11:53:49.863207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.557 qpair failed and we were unable to recover it. 00:30:24.557 [2024-11-15 11:53:49.863582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.557 [2024-11-15 11:53:49.863611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.557 qpair failed and we were unable to recover it. 00:30:24.557 [2024-11-15 11:53:49.863976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.557 [2024-11-15 11:53:49.864004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.557 qpair failed and we were unable to recover it. 00:30:24.557 [2024-11-15 11:53:49.864355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.557 [2024-11-15 11:53:49.864382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.557 qpair failed and we were unable to recover it. 00:30:24.557 [2024-11-15 11:53:49.864766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.557 [2024-11-15 11:53:49.864796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.557 qpair failed and we were unable to recover it. 00:30:24.557 [2024-11-15 11:53:49.865139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.557 [2024-11-15 11:53:49.865167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.557 qpair failed and we were unable to recover it. 00:30:24.557 [2024-11-15 11:53:49.865495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.557 [2024-11-15 11:53:49.865523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.557 qpair failed and we were unable to recover it. 
00:30:24.557 [2024-11-15 11:53:49.865853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.557 [2024-11-15 11:53:49.865883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.557 qpair failed and we were unable to recover it. 00:30:24.557 [2024-11-15 11:53:49.866245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.557 [2024-11-15 11:53:49.866273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.557 qpair failed and we were unable to recover it. 00:30:24.557 [2024-11-15 11:53:49.866511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.557 [2024-11-15 11:53:49.866539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.557 qpair failed and we were unable to recover it. 00:30:24.557 [2024-11-15 11:53:49.866904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.557 [2024-11-15 11:53:49.866934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.557 qpair failed and we were unable to recover it. 00:30:24.557 [2024-11-15 11:53:49.867296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.557 [2024-11-15 11:53:49.867324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.557 qpair failed and we were unable to recover it. 00:30:24.557 [2024-11-15 11:53:49.867684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.557 [2024-11-15 11:53:49.867714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.557 qpair failed and we were unable to recover it. 00:30:24.557 [2024-11-15 11:53:49.868090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.557 [2024-11-15 11:53:49.868118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.557 qpair failed and we were unable to recover it. 00:30:24.557 [2024-11-15 11:53:49.868483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.557 [2024-11-15 11:53:49.868512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.557 qpair failed and we were unable to recover it. 00:30:24.557 [2024-11-15 11:53:49.868884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.557 [2024-11-15 11:53:49.868915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.557 qpair failed and we were unable to recover it. 00:30:24.557 [2024-11-15 11:53:49.869263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.557 [2024-11-15 11:53:49.869291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.557 qpair failed and we were unable to recover it. 
00:30:24.557 [2024-11-15 11:53:49.869650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.558 [2024-11-15 11:53:49.869681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.558 qpair failed and we were unable to recover it. 00:30:24.558 [2024-11-15 11:53:49.870034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.558 [2024-11-15 11:53:49.870068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.558 qpair failed and we were unable to recover it. 00:30:24.558 [2024-11-15 11:53:49.870441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.558 [2024-11-15 11:53:49.870468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.558 qpair failed and we were unable to recover it. 00:30:24.558 [2024-11-15 11:53:49.870808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.558 [2024-11-15 11:53:49.870837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.558 qpair failed and we were unable to recover it. 00:30:24.558 [2024-11-15 11:53:49.871237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.558 [2024-11-15 11:53:49.871265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.558 qpair failed and we were unable to recover it. 00:30:24.558 [2024-11-15 11:53:49.871647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.558 [2024-11-15 11:53:49.871676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.558 qpair failed and we were unable to recover it. 00:30:24.558 [2024-11-15 11:53:49.872045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.558 [2024-11-15 11:53:49.872073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.558 qpair failed and we were unable to recover it. 00:30:24.558 [2024-11-15 11:53:49.872429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.558 [2024-11-15 11:53:49.872457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.558 qpair failed and we were unable to recover it. 00:30:24.558 [2024-11-15 11:53:49.872804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.558 [2024-11-15 11:53:49.872833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.558 qpair failed and we were unable to recover it. 00:30:24.558 [2024-11-15 11:53:49.873199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.558 [2024-11-15 11:53:49.873227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.558 qpair failed and we were unable to recover it. 
00:30:24.558 [2024-11-15 11:53:49.873603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.558 [2024-11-15 11:53:49.873633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.558 qpair failed and we were unable to recover it. 00:30:24.558 [2024-11-15 11:53:49.873964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.558 [2024-11-15 11:53:49.873992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.558 qpair failed and we were unable to recover it. 00:30:24.558 [2024-11-15 11:53:49.874352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.558 [2024-11-15 11:53:49.874380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.558 qpair failed and we were unable to recover it. 00:30:24.558 [2024-11-15 11:53:49.874664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.558 [2024-11-15 11:53:49.874693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.558 qpair failed and we were unable to recover it. 00:30:24.558 [2024-11-15 11:53:49.875077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.558 [2024-11-15 11:53:49.875104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.558 qpair failed and we were unable to recover it. 00:30:24.558 [2024-11-15 11:53:49.875468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.558 [2024-11-15 11:53:49.875497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.558 qpair failed and we were unable to recover it. 00:30:24.558 [2024-11-15 11:53:49.875856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.558 [2024-11-15 11:53:49.875885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.558 qpair failed and we were unable to recover it. 00:30:24.558 [2024-11-15 11:53:49.876252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.558 [2024-11-15 11:53:49.876280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.558 qpair failed and we were unable to recover it. 00:30:24.558 [2024-11-15 11:53:49.876640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.558 [2024-11-15 11:53:49.876670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.558 qpair failed and we were unable to recover it. 00:30:24.558 [2024-11-15 11:53:49.877018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.558 [2024-11-15 11:53:49.877046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.558 qpair failed and we were unable to recover it. 
00:30:24.558 [2024-11-15 11:53:49.877370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.558 [2024-11-15 11:53:49.877398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.558 qpair failed and we were unable to recover it. 00:30:24.558 [2024-11-15 11:53:49.877678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.558 [2024-11-15 11:53:49.877708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.558 qpair failed and we were unable to recover it. 00:30:24.558 [2024-11-15 11:53:49.878032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.558 [2024-11-15 11:53:49.878060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.558 qpair failed and we were unable to recover it. 00:30:24.558 [2024-11-15 11:53:49.878284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.558 [2024-11-15 11:53:49.878315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.558 qpair failed and we were unable to recover it. 00:30:24.558 [2024-11-15 11:53:49.878687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.558 [2024-11-15 11:53:49.878717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.558 qpair failed and we were unable to recover it. 00:30:24.558 [2024-11-15 11:53:49.879084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.558 [2024-11-15 11:53:49.879112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.558 qpair failed and we were unable to recover it. 00:30:24.558 [2024-11-15 11:53:49.879447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.558 [2024-11-15 11:53:49.879474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.558 qpair failed and we were unable to recover it. 00:30:24.558 [2024-11-15 11:53:49.879851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.558 [2024-11-15 11:53:49.879881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.558 qpair failed and we were unable to recover it. 00:30:24.558 [2024-11-15 11:53:49.880231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.558 [2024-11-15 11:53:49.880259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.558 qpair failed and we were unable to recover it. 00:30:24.558 [2024-11-15 11:53:49.880614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.558 [2024-11-15 11:53:49.880644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.558 qpair failed and we were unable to recover it. 
00:30:24.558 [2024-11-15 11:53:49.880979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.558 [2024-11-15 11:53:49.881008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.558 qpair failed and we were unable to recover it. 00:30:24.558 [2024-11-15 11:53:49.881352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.558 [2024-11-15 11:53:49.881380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.558 qpair failed and we were unable to recover it. 00:30:24.558 [2024-11-15 11:53:49.881750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.558 [2024-11-15 11:53:49.881780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.558 qpair failed and we were unable to recover it. 00:30:24.558 [2024-11-15 11:53:49.882096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.558 [2024-11-15 11:53:49.882124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.558 qpair failed and we were unable to recover it. 00:30:24.558 [2024-11-15 11:53:49.882442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.558 [2024-11-15 11:53:49.882469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.558 qpair failed and we were unable to recover it. 00:30:24.558 [2024-11-15 11:53:49.882817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.558 [2024-11-15 11:53:49.882847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.558 qpair failed and we were unable to recover it. 00:30:24.558 [2024-11-15 11:53:49.883215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.558 [2024-11-15 11:53:49.883243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.558 qpair failed and we were unable to recover it. 00:30:24.558 [2024-11-15 11:53:49.883609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.558 [2024-11-15 11:53:49.883638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.558 qpair failed and we were unable to recover it. 00:30:24.558 [2024-11-15 11:53:49.884021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.558 [2024-11-15 11:53:49.884050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.558 qpair failed and we were unable to recover it. 00:30:24.559 [2024-11-15 11:53:49.884396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.559 [2024-11-15 11:53:49.884425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.559 qpair failed and we were unable to recover it. 
00:30:24.559 [2024-11-15 11:53:49.884763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.559 [2024-11-15 11:53:49.884792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.559 qpair failed and we were unable to recover it. 00:30:24.559 [2024-11-15 11:53:49.885157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.559 [2024-11-15 11:53:49.885185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.559 qpair failed and we were unable to recover it. 00:30:24.559 [2024-11-15 11:53:49.885554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.559 [2024-11-15 11:53:49.885609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.559 qpair failed and we were unable to recover it. 00:30:24.559 [2024-11-15 11:53:49.885954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.559 [2024-11-15 11:53:49.885982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.559 qpair failed and we were unable to recover it. 00:30:24.559 [2024-11-15 11:53:49.886348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.559 [2024-11-15 11:53:49.886376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.559 qpair failed and we were unable to recover it. 00:30:24.559 [2024-11-15 11:53:49.886738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.559 [2024-11-15 11:53:49.886768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.559 qpair failed and we were unable to recover it. 00:30:24.559 [2024-11-15 11:53:49.887148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.559 [2024-11-15 11:53:49.887176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.559 qpair failed and we were unable to recover it. 00:30:24.559 [2024-11-15 11:53:49.887532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.559 [2024-11-15 11:53:49.887560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.559 qpair failed and we were unable to recover it. 00:30:24.559 [2024-11-15 11:53:49.887897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.559 [2024-11-15 11:53:49.887925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.559 qpair failed and we were unable to recover it. 00:30:24.559 [2024-11-15 11:53:49.888302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.559 [2024-11-15 11:53:49.888330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.559 qpair failed and we were unable to recover it. 
00:30:24.559 [2024-11-15 11:53:49.888702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.559 [2024-11-15 11:53:49.888731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.559 qpair failed and we were unable to recover it. 00:30:24.559 [2024-11-15 11:53:49.889089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.559 [2024-11-15 11:53:49.889117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.559 qpair failed and we were unable to recover it. 00:30:24.559 [2024-11-15 11:53:49.889458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.559 [2024-11-15 11:53:49.889486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.559 qpair failed and we were unable to recover it. 00:30:24.559 [2024-11-15 11:53:49.889780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.559 [2024-11-15 11:53:49.889810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.559 qpair failed and we were unable to recover it. 00:30:24.559 [2024-11-15 11:53:49.890155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.559 [2024-11-15 11:53:49.890183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.559 qpair failed and we were unable to recover it. 00:30:24.559 [2024-11-15 11:53:49.890553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.559 [2024-11-15 11:53:49.890593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.559 qpair failed and we were unable to recover it. 00:30:24.559 [2024-11-15 11:53:49.890947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.559 [2024-11-15 11:53:49.890975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.559 qpair failed and we were unable to recover it. 00:30:24.559 [2024-11-15 11:53:49.891373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.559 [2024-11-15 11:53:49.891402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.559 qpair failed and we were unable to recover it. 00:30:24.559 [2024-11-15 11:53:49.891784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.559 [2024-11-15 11:53:49.891815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.559 qpair failed and we were unable to recover it. 00:30:24.559 [2024-11-15 11:53:49.892153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.559 [2024-11-15 11:53:49.892181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.559 qpair failed and we were unable to recover it. 
00:30:24.559 [2024-11-15 11:53:49.892534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.559 [2024-11-15 11:53:49.892570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.559 qpair failed and we were unable to recover it. 00:30:24.559 [2024-11-15 11:53:49.892905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.559 [2024-11-15 11:53:49.892935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.559 qpair failed and we were unable to recover it. 00:30:24.559 [2024-11-15 11:53:49.893162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.559 [2024-11-15 11:53:49.893193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.559 qpair failed and we were unable to recover it. 00:30:24.559 [2024-11-15 11:53:49.893577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.559 [2024-11-15 11:53:49.893608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.559 qpair failed and we were unable to recover it. 00:30:24.559 [2024-11-15 11:53:49.893959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.559 [2024-11-15 11:53:49.893987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.559 qpair failed and we were unable to recover it. 00:30:24.559 [2024-11-15 11:53:49.894355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.559 [2024-11-15 11:53:49.894382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.559 qpair failed and we were unable to recover it. 00:30:24.559 [2024-11-15 11:53:49.894755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.559 [2024-11-15 11:53:49.894785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.559 qpair failed and we were unable to recover it. 00:30:24.559 [2024-11-15 11:53:49.895147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.559 [2024-11-15 11:53:49.895176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.559 qpair failed and we were unable to recover it. 00:30:24.559 [2024-11-15 11:53:49.895541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.559 [2024-11-15 11:53:49.895579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.559 qpair failed and we were unable to recover it. 00:30:24.559 [2024-11-15 11:53:49.895943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.559 [2024-11-15 11:53:49.895971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.559 qpair failed and we were unable to recover it. 
00:30:24.559 [2024-11-15 11:53:49.896318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.559 [2024-11-15 11:53:49.896346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.559 qpair failed and we were unable to recover it. 00:30:24.559 [2024-11-15 11:53:49.896754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.559 [2024-11-15 11:53:49.896784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.559 qpair failed and we were unable to recover it. 00:30:24.559 [2024-11-15 11:53:49.897137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.559 [2024-11-15 11:53:49.897165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.559 qpair failed and we were unable to recover it. 00:30:24.559 [2024-11-15 11:53:49.897548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.559 [2024-11-15 11:53:49.897592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.559 qpair failed and we were unable to recover it. 00:30:24.559 [2024-11-15 11:53:49.897827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.559 [2024-11-15 11:53:49.897855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.559 qpair failed and we were unable to recover it. 00:30:24.559 [2024-11-15 11:53:49.898217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.559 [2024-11-15 11:53:49.898245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.559 qpair failed and we were unable to recover it. 00:30:24.559 [2024-11-15 11:53:49.898605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.559 [2024-11-15 11:53:49.898634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.559 qpair failed and we were unable to recover it. 00:30:24.559 [2024-11-15 11:53:49.899005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.560 [2024-11-15 11:53:49.899033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.560 qpair failed and we were unable to recover it. 00:30:24.560 [2024-11-15 11:53:49.899405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.560 [2024-11-15 11:53:49.899434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.560 qpair failed and we were unable to recover it. 00:30:24.560 [2024-11-15 11:53:49.899792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.560 [2024-11-15 11:53:49.899822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.560 qpair failed and we were unable to recover it. 
00:30:24.560 [2024-11-15 11:53:49.900159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.560 [2024-11-15 11:53:49.900187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.560 qpair failed and we were unable to recover it. 00:30:24.560 [2024-11-15 11:53:49.900444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.560 [2024-11-15 11:53:49.900476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.560 qpair failed and we were unable to recover it. 00:30:24.560 [2024-11-15 11:53:49.900817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.560 [2024-11-15 11:53:49.900847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.560 qpair failed and we were unable to recover it. 00:30:24.560 [2024-11-15 11:53:49.901210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.560 [2024-11-15 11:53:49.901239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.560 qpair failed and we were unable to recover it. 00:30:24.560 [2024-11-15 11:53:49.901604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.560 [2024-11-15 11:53:49.901634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.560 qpair failed and we were unable to recover it. 00:30:24.560 [2024-11-15 11:53:49.901966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.560 [2024-11-15 11:53:49.901994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.560 qpair failed and we were unable to recover it. 00:30:24.560 [2024-11-15 11:53:49.902332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.560 [2024-11-15 11:53:49.902360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.560 qpair failed and we were unable to recover it. 00:30:24.560 [2024-11-15 11:53:49.902733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.560 [2024-11-15 11:53:49.902764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.560 qpair failed and we were unable to recover it. 00:30:24.560 [2024-11-15 11:53:49.903123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.560 [2024-11-15 11:53:49.903151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.560 qpair failed and we were unable to recover it. 00:30:24.560 [2024-11-15 11:53:49.903548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.560 [2024-11-15 11:53:49.903599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.560 qpair failed and we were unable to recover it. 
00:30:24.560 [2024-11-15 11:53:49.903961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.560 [2024-11-15 11:53:49.903990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.560 qpair failed and we were unable to recover it. 00:30:24.560 [2024-11-15 11:53:49.904356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.560 [2024-11-15 11:53:49.904384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.560 qpair failed and we were unable to recover it. 00:30:24.560 [2024-11-15 11:53:49.904761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.560 [2024-11-15 11:53:49.904792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.560 qpair failed and we were unable to recover it. 00:30:24.560 [2024-11-15 11:53:49.905172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.560 [2024-11-15 11:53:49.905199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.560 qpair failed and we were unable to recover it. 00:30:24.560 [2024-11-15 11:53:49.905594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.560 [2024-11-15 11:53:49.905625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.560 qpair failed and we were unable to recover it. 00:30:24.560 [2024-11-15 11:53:49.905974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.560 [2024-11-15 11:53:49.906004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.560 qpair failed and we were unable to recover it. 00:30:24.560 [2024-11-15 11:53:49.906365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.560 [2024-11-15 11:53:49.906393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.560 qpair failed and we were unable to recover it. 00:30:24.560 [2024-11-15 11:53:49.906801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.560 [2024-11-15 11:53:49.906831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.560 qpair failed and we were unable to recover it. 00:30:24.560 [2024-11-15 11:53:49.907171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.560 [2024-11-15 11:53:49.907200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.560 qpair failed and we were unable to recover it. 00:30:24.560 [2024-11-15 11:53:49.907570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.560 [2024-11-15 11:53:49.907599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.560 qpair failed and we were unable to recover it. 
00:30:24.560 [2024-11-15 11:53:49.907960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.560 [2024-11-15 11:53:49.907988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.560 qpair failed and we were unable to recover it. 00:30:24.560 [2024-11-15 11:53:49.908322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.560 [2024-11-15 11:53:49.908350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.560 qpair failed and we were unable to recover it. 00:30:24.560 [2024-11-15 11:53:49.908595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.560 [2024-11-15 11:53:49.908625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.560 qpair failed and we were unable to recover it. 00:30:24.560 [2024-11-15 11:53:49.908999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.560 [2024-11-15 11:53:49.909027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.560 qpair failed and we were unable to recover it. 00:30:24.560 [2024-11-15 11:53:49.909282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.560 [2024-11-15 11:53:49.909313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.560 qpair failed and we were unable to recover it. 00:30:24.560 [2024-11-15 11:53:49.909682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.560 [2024-11-15 11:53:49.909712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.560 qpair failed and we were unable to recover it. 00:30:24.560 [2024-11-15 11:53:49.910034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.560 [2024-11-15 11:53:49.910064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.560 qpair failed and we were unable to recover it. 00:30:24.560 [2024-11-15 11:53:49.910418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.560 [2024-11-15 11:53:49.910446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.560 qpair failed and we were unable to recover it. 00:30:24.560 [2024-11-15 11:53:49.910790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.560 [2024-11-15 11:53:49.910820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.560 qpair failed and we were unable to recover it. 00:30:24.560 [2024-11-15 11:53:49.911177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.560 [2024-11-15 11:53:49.911206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.560 qpair failed and we were unable to recover it. 
00:30:24.560 [2024-11-15 11:53:49.911584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.560 [2024-11-15 11:53:49.911620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.560 qpair failed and we were unable to recover it.
00:30:24.560 [2024-11-15 11:53:49.911986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.560 [2024-11-15 11:53:49.912016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.560 qpair failed and we were unable to recover it.
00:30:24.560 [2024-11-15 11:53:49.912377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.560 [2024-11-15 11:53:49.912407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.560 qpair failed and we were unable to recover it.
00:30:24.560 [2024-11-15 11:53:49.912771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.560 [2024-11-15 11:53:49.912803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.560 qpair failed and we were unable to recover it.
00:30:24.560 [2024-11-15 11:53:49.913210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.560 [2024-11-15 11:53:49.913240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.560 qpair failed and we were unable to recover it.
00:30:24.560 [2024-11-15 11:53:49.913590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.560 [2024-11-15 11:53:49.913620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.560 qpair failed and we were unable to recover it.
00:30:24.560 [2024-11-15 11:53:49.913955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.561 [2024-11-15 11:53:49.913984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.561 qpair failed and we were unable to recover it.
00:30:24.561 [2024-11-15 11:53:49.914353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.561 [2024-11-15 11:53:49.914381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.561 qpair failed and we were unable to recover it.
00:30:24.561 [2024-11-15 11:53:49.914724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.561 [2024-11-15 11:53:49.914755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.561 qpair failed and we were unable to recover it.
00:30:24.561 [2024-11-15 11:53:49.915116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.561 [2024-11-15 11:53:49.915144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.561 qpair failed and we were unable to recover it.
00:30:24.561 [2024-11-15 11:53:49.915508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.561 [2024-11-15 11:53:49.915537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.561 qpair failed and we were unable to recover it.
00:30:24.561 [2024-11-15 11:53:49.915909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.561 [2024-11-15 11:53:49.915939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.561 qpair failed and we were unable to recover it.
00:30:24.561 [2024-11-15 11:53:49.916306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.561 [2024-11-15 11:53:49.916335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.561 qpair failed and we were unable to recover it.
00:30:24.561 [2024-11-15 11:53:49.916698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.561 [2024-11-15 11:53:49.916731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.561 qpair failed and we were unable to recover it.
00:30:24.561 [2024-11-15 11:53:49.917098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.561 [2024-11-15 11:53:49.917129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.561 qpair failed and we were unable to recover it.
00:30:24.561 [2024-11-15 11:53:49.917495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.561 [2024-11-15 11:53:49.917523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.561 qpair failed and we were unable to recover it.
00:30:24.561 [2024-11-15 11:53:49.917807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.561 [2024-11-15 11:53:49.917840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.561 qpair failed and we were unable to recover it.
00:30:24.561 [2024-11-15 11:53:49.918182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.561 [2024-11-15 11:53:49.918211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.561 qpair failed and we were unable to recover it.
00:30:24.561 [2024-11-15 11:53:49.918574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.561 [2024-11-15 11:53:49.918604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.561 qpair failed and we were unable to recover it.
00:30:24.561 [2024-11-15 11:53:49.918962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.561 [2024-11-15 11:53:49.918991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.561 qpair failed and we were unable to recover it.
00:30:24.561 [2024-11-15 11:53:49.919351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.561 [2024-11-15 11:53:49.919378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.561 qpair failed and we were unable to recover it.
00:30:24.561 [2024-11-15 11:53:49.919742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.561 [2024-11-15 11:53:49.919774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.561 qpair failed and we were unable to recover it.
00:30:24.561 [2024-11-15 11:53:49.920130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.561 [2024-11-15 11:53:49.920159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.561 qpair failed and we were unable to recover it.
00:30:24.561 [2024-11-15 11:53:49.920533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.561 [2024-11-15 11:53:49.920576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.561 qpair failed and we were unable to recover it.
00:30:24.561 [2024-11-15 11:53:49.920937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.561 [2024-11-15 11:53:49.920966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.561 qpair failed and we were unable to recover it.
00:30:24.561 [2024-11-15 11:53:49.921338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.561 [2024-11-15 11:53:49.921367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.561 qpair failed and we were unable to recover it.
00:30:24.561 [2024-11-15 11:53:49.921739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.561 [2024-11-15 11:53:49.921770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.561 qpair failed and we were unable to recover it.
00:30:24.561 [2024-11-15 11:53:49.922134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.561 [2024-11-15 11:53:49.922163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.561 qpair failed and we were unable to recover it.
00:30:24.561 [2024-11-15 11:53:49.922509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.561 [2024-11-15 11:53:49.922539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.561 qpair failed and we were unable to recover it.
00:30:24.561 [2024-11-15 11:53:49.922886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.561 [2024-11-15 11:53:49.922917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.561 qpair failed and we were unable to recover it.
00:30:24.561 [2024-11-15 11:53:49.923281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.561 [2024-11-15 11:53:49.923310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.561 qpair failed and we were unable to recover it.
00:30:24.561 [2024-11-15 11:53:49.923705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.561 [2024-11-15 11:53:49.923737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.561 qpair failed and we were unable to recover it.
00:30:24.561 [2024-11-15 11:53:49.924105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.561 [2024-11-15 11:53:49.924133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.561 qpair failed and we were unable to recover it.
00:30:24.561 [2024-11-15 11:53:49.924486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.561 [2024-11-15 11:53:49.924514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.561 qpair failed and we were unable to recover it.
00:30:24.561 [2024-11-15 11:53:49.924902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.561 [2024-11-15 11:53:49.924933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.561 qpair failed and we were unable to recover it.
00:30:24.561 [2024-11-15 11:53:49.925289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.561 [2024-11-15 11:53:49.925318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.561 qpair failed and we were unable to recover it.
00:30:24.561 [2024-11-15 11:53:49.925587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.561 [2024-11-15 11:53:49.925617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.561 qpair failed and we were unable to recover it.
00:30:24.561 [2024-11-15 11:53:49.926014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.561 [2024-11-15 11:53:49.926043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.561 qpair failed and we were unable to recover it.
00:30:24.561 [2024-11-15 11:53:49.926296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.561 [2024-11-15 11:53:49.926325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.561 qpair failed and we were unable to recover it.
00:30:24.561 [2024-11-15 11:53:49.926642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.561 [2024-11-15 11:53:49.926672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.561 qpair failed and we were unable to recover it.
00:30:24.561 [2024-11-15 11:53:49.927015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.561 [2024-11-15 11:53:49.927044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.561 qpair failed and we were unable to recover it.
00:30:24.561 [2024-11-15 11:53:49.927408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.561 [2024-11-15 11:53:49.927438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.561 qpair failed and we were unable to recover it.
00:30:24.561 [2024-11-15 11:53:49.927775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.561 [2024-11-15 11:53:49.927805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.561 qpair failed and we were unable to recover it.
00:30:24.561 [2024-11-15 11:53:49.928171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.561 [2024-11-15 11:53:49.928199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.561 qpair failed and we were unable to recover it.
00:30:24.561 [2024-11-15 11:53:49.928583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.561 [2024-11-15 11:53:49.928613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.562 qpair failed and we were unable to recover it.
00:30:24.562 [2024-11-15 11:53:49.928963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.562 [2024-11-15 11:53:49.928991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.562 qpair failed and we were unable to recover it.
00:30:24.562 [2024-11-15 11:53:49.929352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.562 [2024-11-15 11:53:49.929381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.562 qpair failed and we were unable to recover it.
00:30:24.562 [2024-11-15 11:53:49.929738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.562 [2024-11-15 11:53:49.929768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.562 qpair failed and we were unable to recover it.
00:30:24.562 [2024-11-15 11:53:49.930017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.562 [2024-11-15 11:53:49.930045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.562 qpair failed and we were unable to recover it.
00:30:24.562 [2024-11-15 11:53:49.930424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.562 [2024-11-15 11:53:49.930452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.562 qpair failed and we were unable to recover it.
00:30:24.562 [2024-11-15 11:53:49.930708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.562 [2024-11-15 11:53:49.930740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.562 qpair failed and we were unable to recover it.
00:30:24.562 [2024-11-15 11:53:49.931096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.562 [2024-11-15 11:53:49.931124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.562 qpair failed and we were unable to recover it.
00:30:24.562 [2024-11-15 11:53:49.931444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.562 [2024-11-15 11:53:49.931472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.562 qpair failed and we were unable to recover it.
00:30:24.562 [2024-11-15 11:53:49.931815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.562 [2024-11-15 11:53:49.931844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.562 qpair failed and we were unable to recover it.
00:30:24.562 [2024-11-15 11:53:49.932235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.562 [2024-11-15 11:53:49.932266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.562 qpair failed and we were unable to recover it.
00:30:24.562 [2024-11-15 11:53:49.932639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.562 [2024-11-15 11:53:49.932670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.562 qpair failed and we were unable to recover it.
00:30:24.562 [2024-11-15 11:53:49.933057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.562 [2024-11-15 11:53:49.933086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.562 qpair failed and we were unable to recover it.
00:30:24.562 [2024-11-15 11:53:49.933438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.562 [2024-11-15 11:53:49.933467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.562 qpair failed and we were unable to recover it.
00:30:24.562 [2024-11-15 11:53:49.933810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.562 [2024-11-15 11:53:49.933842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.562 qpair failed and we were unable to recover it.
00:30:24.562 [2024-11-15 11:53:49.934217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.562 [2024-11-15 11:53:49.934245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.562 qpair failed and we were unable to recover it.
00:30:24.562 [2024-11-15 11:53:49.934617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.562 [2024-11-15 11:53:49.934648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.562 qpair failed and we were unable to recover it.
00:30:24.562 [2024-11-15 11:53:49.934998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.562 [2024-11-15 11:53:49.935027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.562 qpair failed and we were unable to recover it.
00:30:24.562 [2024-11-15 11:53:49.935387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.562 [2024-11-15 11:53:49.935415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.562 qpair failed and we were unable to recover it.
00:30:24.562 [2024-11-15 11:53:49.935805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.562 [2024-11-15 11:53:49.935835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.562 qpair failed and we were unable to recover it.
00:30:24.562 [2024-11-15 11:53:49.936092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.562 [2024-11-15 11:53:49.936122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.562 qpair failed and we were unable to recover it.
00:30:24.562 [2024-11-15 11:53:49.936487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.562 [2024-11-15 11:53:49.936516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.562 qpair failed and we were unable to recover it.
00:30:24.562 [2024-11-15 11:53:49.936879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.562 [2024-11-15 11:53:49.936909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.562 qpair failed and we were unable to recover it.
00:30:24.562 [2024-11-15 11:53:49.937269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.562 [2024-11-15 11:53:49.937298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.562 qpair failed and we were unable to recover it.
00:30:24.562 [2024-11-15 11:53:49.937663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.562 [2024-11-15 11:53:49.937699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.562 qpair failed and we were unable to recover it.
00:30:24.562 [2024-11-15 11:53:49.938055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.562 [2024-11-15 11:53:49.938084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.562 qpair failed and we were unable to recover it.
00:30:24.562 [2024-11-15 11:53:49.938510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.562 [2024-11-15 11:53:49.938538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.562 qpair failed and we were unable to recover it.
00:30:24.562 [2024-11-15 11:53:49.938928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.562 [2024-11-15 11:53:49.938959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.562 qpair failed and we were unable to recover it.
00:30:24.562 [2024-11-15 11:53:49.939334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.562 [2024-11-15 11:53:49.939363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.562 qpair failed and we were unable to recover it.
00:30:24.562 [2024-11-15 11:53:49.939721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.562 [2024-11-15 11:53:49.939752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.562 qpair failed and we were unable to recover it.
00:30:24.562 [2024-11-15 11:53:49.940114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.562 [2024-11-15 11:53:49.940143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.562 qpair failed and we were unable to recover it.
00:30:24.562 [2024-11-15 11:53:49.940416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.562 [2024-11-15 11:53:49.940446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.562 qpair failed and we were unable to recover it.
00:30:24.562 [2024-11-15 11:53:49.940785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.562 [2024-11-15 11:53:49.940815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.562 qpair failed and we were unable to recover it.
00:30:24.562 [2024-11-15 11:53:49.941181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.562 [2024-11-15 11:53:49.941210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.562 qpair failed and we were unable to recover it.
00:30:24.562 [2024-11-15 11:53:49.941609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.562 [2024-11-15 11:53:49.941640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.562 qpair failed and we were unable to recover it.
00:30:24.562 [2024-11-15 11:53:49.942033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.562 [2024-11-15 11:53:49.942061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.563 qpair failed and we were unable to recover it.
00:30:24.563 [2024-11-15 11:53:49.942428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.563 [2024-11-15 11:53:49.942457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.563 qpair failed and we were unable to recover it.
00:30:24.563 [2024-11-15 11:53:49.942786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.563 [2024-11-15 11:53:49.942815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.563 qpair failed and we were unable to recover it.
00:30:24.563 [2024-11-15 11:53:49.943177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.563 [2024-11-15 11:53:49.943207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.563 qpair failed and we were unable to recover it.
00:30:24.563 [2024-11-15 11:53:49.943578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.563 [2024-11-15 11:53:49.943610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.563 qpair failed and we were unable to recover it.
00:30:24.563 [2024-11-15 11:53:49.943963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.563 [2024-11-15 11:53:49.943992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.563 qpair failed and we were unable to recover it.
00:30:24.563 [2024-11-15 11:53:49.944345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.563 [2024-11-15 11:53:49.944374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.563 qpair failed and we were unable to recover it.
00:30:24.563 [2024-11-15 11:53:49.944621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.563 [2024-11-15 11:53:49.944652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.563 qpair failed and we were unable to recover it.
00:30:24.563 [2024-11-15 11:53:49.944969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.563 [2024-11-15 11:53:49.944999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.563 qpair failed and we were unable to recover it.
00:30:24.563 [2024-11-15 11:53:49.945367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.563 [2024-11-15 11:53:49.945395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.563 qpair failed and we were unable to recover it.
00:30:24.563 [2024-11-15 11:53:49.945828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.563 [2024-11-15 11:53:49.945859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.563 qpair failed and we were unable to recover it.
00:30:24.563 [2024-11-15 11:53:49.946240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.563 [2024-11-15 11:53:49.946268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.563 qpair failed and we were unable to recover it.
00:30:24.563 [2024-11-15 11:53:49.946648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.563 [2024-11-15 11:53:49.946678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.563 qpair failed and we were unable to recover it.
00:30:24.563 [2024-11-15 11:53:49.947050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.563 [2024-11-15 11:53:49.947079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.563 qpair failed and we were unable to recover it.
00:30:24.563 [2024-11-15 11:53:49.947438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.563 [2024-11-15 11:53:49.947469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.563 qpair failed and we were unable to recover it.
00:30:24.563 [2024-11-15 11:53:49.947815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.563 [2024-11-15 11:53:49.947848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.563 qpair failed and we were unable to recover it.
00:30:24.563 [2024-11-15 11:53:49.948205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.563 [2024-11-15 11:53:49.948234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.563 qpair failed and we were unable to recover it.
00:30:24.563 [2024-11-15 11:53:49.948590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.563 [2024-11-15 11:53:49.948622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.563 qpair failed and we were unable to recover it.
00:30:24.563 [2024-11-15 11:53:49.948995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.563 [2024-11-15 11:53:49.949024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.563 qpair failed and we were unable to recover it.
00:30:24.563 [2024-11-15 11:53:49.949387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.563 [2024-11-15 11:53:49.949417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.563 qpair failed and we were unable to recover it.
00:30:24.563 [2024-11-15 11:53:49.949663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.563 [2024-11-15 11:53:49.949693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.563 qpair failed and we were unable to recover it.
00:30:24.563 [2024-11-15 11:53:49.950014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.563 [2024-11-15 11:53:49.950044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.563 qpair failed and we were unable to recover it.
00:30:24.563 [2024-11-15 11:53:49.950450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.563 [2024-11-15 11:53:49.950478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.563 qpair failed and we were unable to recover it.
00:30:24.563 [2024-11-15 11:53:49.950812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.563 [2024-11-15 11:53:49.950842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.563 qpair failed and we were unable to recover it.
00:30:24.563 [2024-11-15 11:53:49.951210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.563 [2024-11-15 11:53:49.951238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.563 qpair failed and we were unable to recover it.
00:30:24.563 [2024-11-15 11:53:49.951608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.563 [2024-11-15 11:53:49.951638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.563 qpair failed and we were unable to recover it.
00:30:24.563 [2024-11-15 11:53:49.951991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.563 [2024-11-15 11:53:49.952020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.563 qpair failed and we were unable to recover it.
00:30:24.563 [2024-11-15 11:53:49.952380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.563 [2024-11-15 11:53:49.952413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.563 qpair failed and we were unable to recover it.
00:30:24.563 [2024-11-15 11:53:49.952797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.563 [2024-11-15 11:53:49.952828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.563 qpair failed and we were unable to recover it.
00:30:24.563 [2024-11-15 11:53:49.953205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.563 [2024-11-15 11:53:49.953235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.563 qpair failed and we were unable to recover it.
00:30:24.563 [2024-11-15 11:53:49.953598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.563 [2024-11-15 11:53:49.953641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.563 qpair failed and we were unable to recover it.
00:30:24.563 [2024-11-15 11:53:49.953997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.563 [2024-11-15 11:53:49.954026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.563 qpair failed and we were unable to recover it.
00:30:24.563 [2024-11-15 11:53:49.954413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.563 [2024-11-15 11:53:49.954442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.563 qpair failed and we were unable to recover it.
00:30:24.563 [2024-11-15 11:53:49.954782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.563 [2024-11-15 11:53:49.954811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.563 qpair failed and we were unable to recover it.
00:30:24.563 [2024-11-15 11:53:49.955169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.563 [2024-11-15 11:53:49.955197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.563 qpair failed and we were unable to recover it.
00:30:24.563 [2024-11-15 11:53:49.955584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.563 [2024-11-15 11:53:49.955615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.563 qpair failed and we were unable to recover it.
00:30:24.563 [2024-11-15 11:53:49.955962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.563 [2024-11-15 11:53:49.955991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.563 qpair failed and we were unable to recover it.
00:30:24.563 [2024-11-15 11:53:49.956327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.563 [2024-11-15 11:53:49.956357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.563 qpair failed and we were unable to recover it.
00:30:24.563 [2024-11-15 11:53:49.956695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.563 [2024-11-15 11:53:49.956725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.563 qpair failed and we were unable to recover it.
00:30:24.563 [2024-11-15 11:53:49.957076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.564 [2024-11-15 11:53:49.957104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.564 qpair failed and we were unable to recover it.
00:30:24.564 [2024-11-15 11:53:49.957328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.564 [2024-11-15 11:53:49.957360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.564 qpair failed and we were unable to recover it.
00:30:24.564 [2024-11-15 11:53:49.957708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.564 [2024-11-15 11:53:49.957738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.564 qpair failed and we were unable to recover it.
00:30:24.564 [2024-11-15 11:53:49.958109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.564 [2024-11-15 11:53:49.958138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.564 qpair failed and we were unable to recover it.
00:30:24.564 [2024-11-15 11:53:49.958502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.564 [2024-11-15 11:53:49.958531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.564 qpair failed and we were unable to recover it.
00:30:24.564 [2024-11-15 11:53:49.958912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.564 [2024-11-15 11:53:49.958941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.564 qpair failed and we were unable to recover it.
00:30:24.564 [2024-11-15 11:53:49.959313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.564 [2024-11-15 11:53:49.959342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.564 qpair failed and we were unable to recover it.
00:30:24.564 [2024-11-15 11:53:49.959699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.564 [2024-11-15 11:53:49.959729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.564 qpair failed and we were unable to recover it.
00:30:24.564 [2024-11-15 11:53:49.960103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.564 [2024-11-15 11:53:49.960132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.564 qpair failed and we were unable to recover it.
00:30:24.564 [2024-11-15 11:53:49.960494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.564 [2024-11-15 11:53:49.960525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.564 qpair failed and we were unable to recover it.
00:30:24.564 [2024-11-15 11:53:49.960865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.564 [2024-11-15 11:53:49.960896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.564 qpair failed and we were unable to recover it.
00:30:24.564 [2024-11-15 11:53:49.961240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.564 [2024-11-15 11:53:49.961270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.564 qpair failed and we were unable to recover it.
00:30:24.564 [2024-11-15 11:53:49.961617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.564 [2024-11-15 11:53:49.961648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.564 qpair failed and we were unable to recover it.
00:30:24.564 [2024-11-15 11:53:49.961975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.564 [2024-11-15 11:53:49.962003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.564 qpair failed and we were unable to recover it.
00:30:24.564 [2024-11-15 11:53:49.962381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.564 [2024-11-15 11:53:49.962409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.564 qpair failed and we were unable to recover it.
00:30:24.564 [2024-11-15 11:53:49.962771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.564 [2024-11-15 11:53:49.962802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.564 qpair failed and we were unable to recover it.
00:30:24.564 [2024-11-15 11:53:49.963204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.564 [2024-11-15 11:53:49.963233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.564 qpair failed and we were unable to recover it.
00:30:24.564 [2024-11-15 11:53:49.963609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.564 [2024-11-15 11:53:49.963639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.564 qpair failed and we were unable to recover it.
00:30:24.564 [2024-11-15 11:53:49.963989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.564 [2024-11-15 11:53:49.964023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.564 qpair failed and we were unable to recover it.
00:30:24.564 [2024-11-15 11:53:49.964337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.564 [2024-11-15 11:53:49.964367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.564 qpair failed and we were unable to recover it.
00:30:24.564 [2024-11-15 11:53:49.964593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.564 [2024-11-15 11:53:49.964626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.564 qpair failed and we were unable to recover it.
00:30:24.564 [2024-11-15 11:53:49.964983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.564 [2024-11-15 11:53:49.965012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.564 qpair failed and we were unable to recover it.
00:30:24.564 [2024-11-15 11:53:49.965391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.564 [2024-11-15 11:53:49.965420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.564 qpair failed and we were unable to recover it.
00:30:24.564 [2024-11-15 11:53:49.965781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.564 [2024-11-15 11:53:49.965813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.564 qpair failed and we were unable to recover it.
00:30:24.564 [2024-11-15 11:53:49.966184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.564 [2024-11-15 11:53:49.966212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.564 qpair failed and we were unable to recover it.
00:30:24.564 [2024-11-15 11:53:49.966586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.564 [2024-11-15 11:53:49.966617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.564 qpair failed and we were unable to recover it.
00:30:24.564 [2024-11-15 11:53:49.966938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.564 [2024-11-15 11:53:49.966967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.564 qpair failed and we were unable to recover it.
00:30:24.564 [2024-11-15 11:53:49.967323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.564 [2024-11-15 11:53:49.967353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.564 qpair failed and we were unable to recover it.
00:30:24.564 [2024-11-15 11:53:49.967705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.564 [2024-11-15 11:53:49.967735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.564 qpair failed and we were unable to recover it.
00:30:24.564 [2024-11-15 11:53:49.968102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.564 [2024-11-15 11:53:49.968132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.564 qpair failed and we were unable to recover it.
00:30:24.564 [2024-11-15 11:53:49.968520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.564 [2024-11-15 11:53:49.968548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.564 qpair failed and we were unable to recover it.
00:30:24.564 [2024-11-15 11:53:49.968885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.564 [2024-11-15 11:53:49.968915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.564 qpair failed and we were unable to recover it.
00:30:24.564 [2024-11-15 11:53:49.969282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.564 [2024-11-15 11:53:49.969311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.564 qpair failed and we were unable to recover it.
00:30:24.564 [2024-11-15 11:53:49.969678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.564 [2024-11-15 11:53:49.969709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.564 qpair failed and we were unable to recover it.
00:30:24.564 [2024-11-15 11:53:49.970050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.564 [2024-11-15 11:53:49.970080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.564 qpair failed and we were unable to recover it.
00:30:24.564 [2024-11-15 11:53:49.970456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.564 [2024-11-15 11:53:49.970484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.564 qpair failed and we were unable to recover it.
00:30:24.564 [2024-11-15 11:53:49.970833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.564 [2024-11-15 11:53:49.970862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.564 qpair failed and we were unable to recover it.
00:30:24.564 [2024-11-15 11:53:49.971248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.564 [2024-11-15 11:53:49.971277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.564 qpair failed and we were unable to recover it.
00:30:24.564 [2024-11-15 11:53:49.971539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.564 [2024-11-15 11:53:49.971576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-11-15 11:53:49.971966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.565 [2024-11-15 11:53:49.971994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-11-15 11:53:49.972246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.565 [2024-11-15 11:53:49.972275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-11-15 11:53:49.972626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.565 [2024-11-15 11:53:49.972656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-11-15 11:53:49.973007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.565 [2024-11-15 11:53:49.973037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-11-15 11:53:49.973456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.565 [2024-11-15 11:53:49.973486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-11-15 11:53:49.973905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.565 [2024-11-15 11:53:49.973935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-11-15 11:53:49.974286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.565 [2024-11-15 11:53:49.974315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-11-15 11:53:49.974553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.565 [2024-11-15 11:53:49.974597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-11-15 11:53:49.974969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.565 [2024-11-15 11:53:49.974998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-11-15 11:53:49.975361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.565 [2024-11-15 11:53:49.975389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-11-15 11:53:49.975783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.565 [2024-11-15 11:53:49.975814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-11-15 11:53:49.976139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.565 [2024-11-15 11:53:49.976168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-11-15 11:53:49.976573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.565 [2024-11-15 11:53:49.976603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-11-15 11:53:49.976946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.565 [2024-11-15 11:53:49.976976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-11-15 11:53:49.977334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.565 [2024-11-15 11:53:49.977364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-11-15 11:53:49.977739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.565 [2024-11-15 11:53:49.977771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-11-15 11:53:49.978113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.565 [2024-11-15 11:53:49.978144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-11-15 11:53:49.978527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.565 [2024-11-15 11:53:49.978557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-11-15 11:53:49.978928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.565 [2024-11-15 11:53:49.978959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-11-15 11:53:49.979288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.565 [2024-11-15 11:53:49.979318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-11-15 11:53:49.979671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.565 [2024-11-15 11:53:49.979708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-11-15 11:53:49.980043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.565 [2024-11-15 11:53:49.980071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-11-15 11:53:49.980440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.565 [2024-11-15 11:53:49.980468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-11-15 11:53:49.980718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.565 [2024-11-15 11:53:49.980748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-11-15 11:53:49.980995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.565 [2024-11-15 11:53:49.981025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-11-15 11:53:49.981380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.565 [2024-11-15 11:53:49.981408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-11-15 11:53:49.981760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.565 [2024-11-15 11:53:49.981792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-11-15 11:53:49.982029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.565 [2024-11-15 11:53:49.982061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-11-15 11:53:49.982483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.565 [2024-11-15 11:53:49.982513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-11-15 11:53:49.982878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.565 [2024-11-15 11:53:49.982908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-11-15 11:53:49.983204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.565 [2024-11-15 11:53:49.983232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-11-15 11:53:49.983597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.565 [2024-11-15 11:53:49.983629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-11-15 11:53:49.983986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.565 [2024-11-15 11:53:49.984015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-11-15 11:53:49.984376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.565 [2024-11-15 11:53:49.984404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-11-15 11:53:49.984641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.565 [2024-11-15 11:53:49.984672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-11-15 11:53:49.985060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.565 [2024-11-15 11:53:49.985089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-11-15 11:53:49.985449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.565 [2024-11-15 11:53:49.985480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-11-15 11:53:49.985841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.565 [2024-11-15 11:53:49.985874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.565 qpair failed and we were unable to recover it.
00:30:24.565 [2024-11-15 11:53:49.986262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.566 [2024-11-15 11:53:49.986292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.566 qpair failed and we were unable to recover it.
00:30:24.566 [2024-11-15 11:53:49.986676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.566 [2024-11-15 11:53:49.986706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.566 qpair failed and we were unable to recover it.
00:30:24.566 [2024-11-15 11:53:49.987047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.566 [2024-11-15 11:53:49.987076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.566 qpair failed and we were unable to recover it.
00:30:24.566 [2024-11-15 11:53:49.987437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.566 [2024-11-15 11:53:49.987466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.566 qpair failed and we were unable to recover it.
00:30:24.566 [2024-11-15 11:53:49.987857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.566 [2024-11-15 11:53:49.987886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.566 qpair failed and we were unable to recover it.
00:30:24.566 [2024-11-15 11:53:49.988262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.566 [2024-11-15 11:53:49.988291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.566 qpair failed and we were unable to recover it.
00:30:24.566 [2024-11-15 11:53:49.988648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.566 [2024-11-15 11:53:49.988677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.566 qpair failed and we were unable to recover it.
00:30:24.566 [2024-11-15 11:53:49.989025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.566 [2024-11-15 11:53:49.989054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.566 qpair failed and we were unable to recover it.
00:30:24.566 [2024-11-15 11:53:49.989426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.566 [2024-11-15 11:53:49.989454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.566 qpair failed and we were unable to recover it.
00:30:24.566 [2024-11-15 11:53:49.989829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.566 [2024-11-15 11:53:49.989865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.566 qpair failed and we were unable to recover it.
00:30:24.566 [2024-11-15 11:53:49.990249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.566 [2024-11-15 11:53:49.990278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.566 qpair failed and we were unable to recover it.
00:30:24.566 [2024-11-15 11:53:49.990638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.566 [2024-11-15 11:53:49.990669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.566 qpair failed and we were unable to recover it.
00:30:24.566 [2024-11-15 11:53:49.991032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.566 [2024-11-15 11:53:49.991061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.566 qpair failed and we were unable to recover it.
00:30:24.566 [2024-11-15 11:53:49.991411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.566 [2024-11-15 11:53:49.991441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.566 qpair failed and we were unable to recover it.
00:30:24.566 [2024-11-15 11:53:49.991775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.566 [2024-11-15 11:53:49.991805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.566 qpair failed and we were unable to recover it.
00:30:24.566 [2024-11-15 11:53:49.992166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.566 [2024-11-15 11:53:49.992194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.566 qpair failed and we were unable to recover it. 00:30:24.566 [2024-11-15 11:53:49.992572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.566 [2024-11-15 11:53:49.992602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.566 qpair failed and we were unable to recover it. 00:30:24.566 [2024-11-15 11:53:49.992975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.566 [2024-11-15 11:53:49.993004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.566 qpair failed and we were unable to recover it. 00:30:24.566 [2024-11-15 11:53:49.993373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.566 [2024-11-15 11:53:49.993402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.566 qpair failed and we were unable to recover it. 00:30:24.566 [2024-11-15 11:53:49.993744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.566 [2024-11-15 11:53:49.993775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.566 qpair failed and we were unable to recover it. 00:30:24.566 [2024-11-15 11:53:49.994092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.566 [2024-11-15 11:53:49.994121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.566 qpair failed and we were unable to recover it. 00:30:24.566 [2024-11-15 11:53:49.994496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.566 [2024-11-15 11:53:49.994524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.566 qpair failed and we were unable to recover it. 00:30:24.566 [2024-11-15 11:53:49.994858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.566 [2024-11-15 11:53:49.994887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.566 qpair failed and we were unable to recover it. 00:30:24.566 [2024-11-15 11:53:49.995250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.566 [2024-11-15 11:53:49.995280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.566 qpair failed and we were unable to recover it. 00:30:24.566 [2024-11-15 11:53:49.995650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.566 [2024-11-15 11:53:49.995680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.566 qpair failed and we were unable to recover it. 
00:30:24.566 [2024-11-15 11:53:49.996002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.566 [2024-11-15 11:53:49.996030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.566 qpair failed and we were unable to recover it. 00:30:24.566 [2024-11-15 11:53:49.996407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.566 [2024-11-15 11:53:49.996436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.566 qpair failed and we were unable to recover it. 00:30:24.566 [2024-11-15 11:53:49.996776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.566 [2024-11-15 11:53:49.996805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.566 qpair failed and we were unable to recover it. 00:30:24.566 [2024-11-15 11:53:49.997160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.566 [2024-11-15 11:53:49.997192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.566 qpair failed and we were unable to recover it. 00:30:24.566 [2024-11-15 11:53:49.997515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.566 [2024-11-15 11:53:49.997544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.566 qpair failed and we were unable to recover it. 00:30:24.566 [2024-11-15 11:53:49.997914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.566 [2024-11-15 11:53:49.997946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.566 qpair failed and we were unable to recover it. 00:30:24.566 [2024-11-15 11:53:49.998379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.566 [2024-11-15 11:53:49.998409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.566 qpair failed and we were unable to recover it. 00:30:24.566 [2024-11-15 11:53:49.998768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.566 [2024-11-15 11:53:49.998799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.566 qpair failed and we were unable to recover it. 00:30:24.566 [2024-11-15 11:53:49.999174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.566 [2024-11-15 11:53:49.999204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.566 qpair failed and we were unable to recover it. 00:30:24.566 [2024-11-15 11:53:49.999575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.566 [2024-11-15 11:53:49.999606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.566 qpair failed and we were unable to recover it. 
00:30:24.566 [2024-11-15 11:53:49.999967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.566 [2024-11-15 11:53:49.999996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.566 qpair failed and we were unable to recover it. 00:30:24.566 [2024-11-15 11:53:50.000401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.566 [2024-11-15 11:53:50.000430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.566 qpair failed and we were unable to recover it. 00:30:24.566 [2024-11-15 11:53:50.000757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.566 [2024-11-15 11:53:50.000787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.567 qpair failed and we were unable to recover it. 00:30:24.567 [2024-11-15 11:53:50.001152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.567 [2024-11-15 11:53:50.001181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.567 qpair failed and we were unable to recover it. 00:30:24.567 [2024-11-15 11:53:50.001560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.567 [2024-11-15 11:53:50.001614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.567 qpair failed and we were unable to recover it. 00:30:24.567 [2024-11-15 11:53:50.001965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.567 [2024-11-15 11:53:50.001996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.567 qpair failed and we were unable to recover it. 00:30:24.567 [2024-11-15 11:53:50.002335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.567 [2024-11-15 11:53:50.002366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.567 qpair failed and we were unable to recover it. 00:30:24.567 [2024-11-15 11:53:50.003221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.567 [2024-11-15 11:53:50.003263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.567 qpair failed and we were unable to recover it. 00:30:24.567 [2024-11-15 11:53:50.003644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.567 [2024-11-15 11:53:50.003679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.567 qpair failed and we were unable to recover it. 00:30:24.567 [2024-11-15 11:53:50.004052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.567 [2024-11-15 11:53:50.004082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.567 qpair failed and we were unable to recover it. 
00:30:24.567 [2024-11-15 11:53:50.004458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.567 [2024-11-15 11:53:50.004489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.567 qpair failed and we were unable to recover it. 00:30:24.567 [2024-11-15 11:53:50.004880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.567 [2024-11-15 11:53:50.004921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.567 qpair failed and we were unable to recover it. 00:30:24.567 [2024-11-15 11:53:50.005319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.567 [2024-11-15 11:53:50.005368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.567 qpair failed and we were unable to recover it. 00:30:24.567 [2024-11-15 11:53:50.005808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.567 [2024-11-15 11:53:50.005866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.567 qpair failed and we were unable to recover it. 00:30:24.567 [2024-11-15 11:53:50.006290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.567 [2024-11-15 11:53:50.006336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.567 qpair failed and we were unable to recover it. 00:30:24.567 [2024-11-15 11:53:50.006769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.567 [2024-11-15 11:53:50.006830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.567 qpair failed and we were unable to recover it. 00:30:24.567 [2024-11-15 11:53:50.007219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.567 [2024-11-15 11:53:50.007250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.567 qpair failed and we were unable to recover it. 00:30:24.567 [2024-11-15 11:53:50.007607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.567 [2024-11-15 11:53:50.007638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.567 qpair failed and we were unable to recover it. 00:30:24.567 [2024-11-15 11:53:50.008025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.567 [2024-11-15 11:53:50.008054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.567 qpair failed and we were unable to recover it. 00:30:24.567 [2024-11-15 11:53:50.009898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.567 [2024-11-15 11:53:50.009967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.567 qpair failed and we were unable to recover it. 
00:30:24.567 [2024-11-15 11:53:50.010344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.567 [2024-11-15 11:53:50.010377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.567 qpair failed and we were unable to recover it. 00:30:24.567 [2024-11-15 11:53:50.010620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.567 [2024-11-15 11:53:50.010654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.567 qpair failed and we were unable to recover it. 00:30:24.567 [2024-11-15 11:53:50.010976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.567 [2024-11-15 11:53:50.011008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.567 qpair failed and we were unable to recover it. 00:30:24.567 [2024-11-15 11:53:50.011362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.567 [2024-11-15 11:53:50.011391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.567 qpair failed and we were unable to recover it. 00:30:24.567 [2024-11-15 11:53:50.011789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.567 [2024-11-15 11:53:50.011821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.567 qpair failed and we were unable to recover it. 00:30:24.567 [2024-11-15 11:53:50.012192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.567 [2024-11-15 11:53:50.012223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.567 qpair failed and we were unable to recover it. 00:30:24.567 [2024-11-15 11:53:50.012626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.567 [2024-11-15 11:53:50.012659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.567 qpair failed and we were unable to recover it. 00:30:24.567 [2024-11-15 11:53:50.013046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.567 [2024-11-15 11:53:50.013076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.567 qpair failed and we were unable to recover it. 00:30:24.567 [2024-11-15 11:53:50.013355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.567 [2024-11-15 11:53:50.013384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.567 qpair failed and we were unable to recover it. 00:30:24.567 [2024-11-15 11:53:50.013652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.567 [2024-11-15 11:53:50.013685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.567 qpair failed and we were unable to recover it. 
00:30:24.567 [2024-11-15 11:53:50.013972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.567 [2024-11-15 11:53:50.014002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.567 qpair failed and we were unable to recover it. 00:30:24.567 [2024-11-15 11:53:50.014377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.567 [2024-11-15 11:53:50.014407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.567 qpair failed and we were unable to recover it. 00:30:24.567 [2024-11-15 11:53:50.014785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.567 [2024-11-15 11:53:50.014815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.567 qpair failed and we were unable to recover it. 00:30:24.567 [2024-11-15 11:53:50.015208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.567 [2024-11-15 11:53:50.015236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.567 qpair failed and we were unable to recover it. 00:30:24.567 [2024-11-15 11:53:50.015627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.567 [2024-11-15 11:53:50.015658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.567 qpair failed and we were unable to recover it. 00:30:24.567 [2024-11-15 11:53:50.016042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.567 [2024-11-15 11:53:50.016070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.567 qpair failed and we were unable to recover it. 00:30:24.567 [2024-11-15 11:53:50.016407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.567 [2024-11-15 11:53:50.016436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.567 qpair failed and we were unable to recover it. 00:30:24.567 [2024-11-15 11:53:50.016792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.567 [2024-11-15 11:53:50.016824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.567 qpair failed and we were unable to recover it. 00:30:24.567 [2024-11-15 11:53:50.017177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.568 [2024-11-15 11:53:50.017206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.568 qpair failed and we were unable to recover it. 00:30:24.568 [2024-11-15 11:53:50.017618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.568 [2024-11-15 11:53:50.017649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.568 qpair failed and we were unable to recover it. 
00:30:24.568 [2024-11-15 11:53:50.017990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.568 [2024-11-15 11:53:50.018020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.568 qpair failed and we were unable to recover it. 00:30:24.568 [2024-11-15 11:53:50.018427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.568 [2024-11-15 11:53:50.018457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.568 qpair failed and we were unable to recover it. 00:30:24.568 [2024-11-15 11:53:50.018834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.568 [2024-11-15 11:53:50.018873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.568 qpair failed and we were unable to recover it. 00:30:24.568 [2024-11-15 11:53:50.019229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.568 [2024-11-15 11:53:50.019259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.568 qpair failed and we were unable to recover it. 00:30:24.568 [2024-11-15 11:53:50.019626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.568 [2024-11-15 11:53:50.019658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.568 qpair failed and we were unable to recover it. 00:30:24.568 [2024-11-15 11:53:50.020014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.568 [2024-11-15 11:53:50.020043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.568 qpair failed and we were unable to recover it. 00:30:24.568 [2024-11-15 11:53:50.020418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.568 [2024-11-15 11:53:50.020446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.568 qpair failed and we were unable to recover it. 00:30:24.568 [2024-11-15 11:53:50.020793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.568 [2024-11-15 11:53:50.020831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.568 qpair failed and we were unable to recover it. 00:30:24.568 [2024-11-15 11:53:50.021219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.568 [2024-11-15 11:53:50.021249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.568 qpair failed and we were unable to recover it. 00:30:24.568 [2024-11-15 11:53:50.021623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.568 [2024-11-15 11:53:50.021654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.568 qpair failed and we were unable to recover it. 
00:30:24.568 [2024-11-15 11:53:50.021923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.568 [2024-11-15 11:53:50.021952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.568 qpair failed and we were unable to recover it. 00:30:24.568 [2024-11-15 11:53:50.022230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.568 [2024-11-15 11:53:50.022260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.568 qpair failed and we were unable to recover it. 00:30:24.568 [2024-11-15 11:53:50.022668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.568 [2024-11-15 11:53:50.022698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.568 qpair failed and we were unable to recover it. 00:30:24.568 [2024-11-15 11:53:50.023025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.568 [2024-11-15 11:53:50.023056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.568 qpair failed and we were unable to recover it. 00:30:24.568 [2024-11-15 11:53:50.023429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.568 [2024-11-15 11:53:50.023459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.568 qpair failed and we were unable to recover it. 00:30:24.568 [2024-11-15 11:53:50.023830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.568 [2024-11-15 11:53:50.023860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.568 qpair failed and we were unable to recover it. 00:30:24.568 [2024-11-15 11:53:50.024217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.568 [2024-11-15 11:53:50.024248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.568 qpair failed and we were unable to recover it. 00:30:24.568 [2024-11-15 11:53:50.024633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.568 [2024-11-15 11:53:50.024666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.568 qpair failed and we were unable to recover it. 00:30:24.568 [2024-11-15 11:53:50.025015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.568 [2024-11-15 11:53:50.025043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.568 qpair failed and we were unable to recover it. 00:30:24.568 [2024-11-15 11:53:50.025468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.568 [2024-11-15 11:53:50.025497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.568 qpair failed and we were unable to recover it. 
00:30:24.568 [2024-11-15 11:53:50.025858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.568 [2024-11-15 11:53:50.025892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.568 qpair failed and we were unable to recover it. 00:30:24.568 [2024-11-15 11:53:50.026285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.568 [2024-11-15 11:53:50.026313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.568 qpair failed and we were unable to recover it. 00:30:24.568 [2024-11-15 11:53:50.026556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.568 [2024-11-15 11:53:50.026599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.568 qpair failed and we were unable to recover it. 00:30:24.568 [2024-11-15 11:53:50.027064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.568 [2024-11-15 11:53:50.027094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.568 qpair failed and we were unable to recover it. 00:30:24.568 [2024-11-15 11:53:50.027453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.568 [2024-11-15 11:53:50.027482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.568 qpair failed and we were unable to recover it. 00:30:24.568 [2024-11-15 11:53:50.027719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.568 [2024-11-15 11:53:50.027749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.568 qpair failed and we were unable to recover it. 00:30:24.568 [2024-11-15 11:53:50.028127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.568 [2024-11-15 11:53:50.028165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.568 qpair failed and we were unable to recover it. 00:30:24.568 [2024-11-15 11:53:50.028549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.568 [2024-11-15 11:53:50.028590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.568 qpair failed and we were unable to recover it. 00:30:24.568 [2024-11-15 11:53:50.028939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.568 [2024-11-15 11:53:50.028968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.568 qpair failed and we were unable to recover it. 00:30:24.568 [2024-11-15 11:53:50.029345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.568 [2024-11-15 11:53:50.029374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.568 qpair failed and we were unable to recover it. 
00:30:24.568 [2024-11-15 11:53:50.029756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.568 [2024-11-15 11:53:50.029789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.568 qpair failed and we were unable to recover it. 00:30:24.568 [2024-11-15 11:53:50.030161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.568 [2024-11-15 11:53:50.030191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.568 qpair failed and we were unable to recover it. 00:30:24.568 [2024-11-15 11:53:50.030473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.568 [2024-11-15 11:53:50.030503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.568 qpair failed and we were unable to recover it. 00:30:24.840 [2024-11-15 11:53:50.030892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.840 [2024-11-15 11:53:50.030925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.840 qpair failed and we were unable to recover it. 00:30:24.840 [2024-11-15 11:53:50.031249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.840 [2024-11-15 11:53:50.031278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.840 qpair failed and we were unable to recover it. 00:30:24.840 [2024-11-15 11:53:50.031658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.840 [2024-11-15 11:53:50.031689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.840 qpair failed and we were unable to recover it. 00:30:24.840 [2024-11-15 11:53:50.032040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.840 [2024-11-15 11:53:50.032069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.840 qpair failed and we were unable to recover it. 00:30:24.840 [2024-11-15 11:53:50.032329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.840 [2024-11-15 11:53:50.032359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.840 qpair failed and we were unable to recover it. 00:30:24.840 [2024-11-15 11:53:50.032699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.840 [2024-11-15 11:53:50.032730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.840 qpair failed and we were unable to recover it. 00:30:24.840 [2024-11-15 11:53:50.033079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.840 [2024-11-15 11:53:50.033111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.840 qpair failed and we were unable to recover it. 
00:30:24.840 [2024-11-15 11:53:50.033464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.840 [2024-11-15 11:53:50.033493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.840 qpair failed and we were unable to recover it. 00:30:24.840 [2024-11-15 11:53:50.033926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.840 [2024-11-15 11:53:50.033958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.840 qpair failed and we were unable to recover it. 00:30:24.840 [2024-11-15 11:53:50.034326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.840 [2024-11-15 11:53:50.034355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.840 qpair failed and we were unable to recover it. 00:30:24.840 [2024-11-15 11:53:50.034721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.840 [2024-11-15 11:53:50.034759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.840 qpair failed and we were unable to recover it. 00:30:24.840 [2024-11-15 11:53:50.035130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.840 [2024-11-15 11:53:50.035160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.840 qpair failed and we were unable to recover it. 00:30:24.840 [2024-11-15 11:53:50.035523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.840 [2024-11-15 11:53:50.035554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.840 qpair failed and we were unable to recover it. 00:30:24.840 [2024-11-15 11:53:50.035925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.840 [2024-11-15 11:53:50.035955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.840 qpair failed and we were unable to recover it. 00:30:24.840 [2024-11-15 11:53:50.036307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.840 [2024-11-15 11:53:50.036336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.840 qpair failed and we were unable to recover it. 00:30:24.840 [2024-11-15 11:53:50.036708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.840 [2024-11-15 11:53:50.036739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.840 qpair failed and we were unable to recover it. 00:30:24.840 [2024-11-15 11:53:50.037076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.840 [2024-11-15 11:53:50.037105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.840 qpair failed and we were unable to recover it. 
00:30:24.840 [2024-11-15 11:53:50.037465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.840 [2024-11-15 11:53:50.037494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.840 qpair failed and we were unable to recover it. 00:30:24.840 [2024-11-15 11:53:50.037856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.840 [2024-11-15 11:53:50.037887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.840 qpair failed and we were unable to recover it. 00:30:24.840 [2024-11-15 11:53:50.038170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.840 [2024-11-15 11:53:50.038199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.840 qpair failed and we were unable to recover it. 00:30:24.840 [2024-11-15 11:53:50.038599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.840 [2024-11-15 11:53:50.038631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.840 qpair failed and we were unable to recover it. 00:30:24.840 [2024-11-15 11:53:50.038995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.840 [2024-11-15 11:53:50.039023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.840 qpair failed and we were unable to recover it. 00:30:24.840 [2024-11-15 11:53:50.039346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.840 [2024-11-15 11:53:50.039374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.840 qpair failed and we were unable to recover it. 00:30:24.840 [2024-11-15 11:53:50.039734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.840 [2024-11-15 11:53:50.039766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.840 qpair failed and we were unable to recover it. 00:30:24.840 [2024-11-15 11:53:50.040102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.840 [2024-11-15 11:53:50.040131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.840 qpair failed and we were unable to recover it. 00:30:24.840 [2024-11-15 11:53:50.040520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.840 [2024-11-15 11:53:50.040549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.840 qpair failed and we were unable to recover it. 00:30:24.840 [2024-11-15 11:53:50.040906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.840 [2024-11-15 11:53:50.040935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.840 qpair failed and we were unable to recover it. 
00:30:24.840 [2024-11-15 11:53:50.042677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.841 [2024-11-15 11:53:50.042745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.841 qpair failed and we were unable to recover it. 00:30:24.841 [2024-11-15 11:53:50.043202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.841 [2024-11-15 11:53:50.043238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.841 qpair failed and we were unable to recover it. 00:30:24.841 [2024-11-15 11:53:50.043522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.841 [2024-11-15 11:53:50.043557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.841 qpair failed and we were unable to recover it. 00:30:24.841 [2024-11-15 11:53:50.045848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.841 [2024-11-15 11:53:50.045918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.841 qpair failed and we were unable to recover it. 00:30:24.841 [2024-11-15 11:53:50.046355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.841 [2024-11-15 11:53:50.046392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.841 qpair failed and we were unable to recover it. 00:30:24.841 [2024-11-15 11:53:50.046772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.841 [2024-11-15 11:53:50.046806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.841 qpair failed and we were unable to recover it. 00:30:24.841 [2024-11-15 11:53:50.047171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.841 [2024-11-15 11:53:50.047202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.841 qpair failed and we were unable to recover it. 00:30:24.841 [2024-11-15 11:53:50.047588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.841 [2024-11-15 11:53:50.047619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.841 qpair failed and we were unable to recover it. 00:30:24.841 [2024-11-15 11:53:50.047967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.841 [2024-11-15 11:53:50.047996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.841 qpair failed and we were unable to recover it. 00:30:24.841 [2024-11-15 11:53:50.048355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.841 [2024-11-15 11:53:50.048385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.841 qpair failed and we were unable to recover it. 
00:30:24.841 [2024-11-15 11:53:50.048713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.841 [2024-11-15 11:53:50.048743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:24.841 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 11:53:50.048 through 11:53:50.131; only the timestamps differ ...]
00:30:24.847 [2024-11-15 11:53:50.131993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.847 [2024-11-15 11:53:50.132022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.847 qpair failed and we were unable to recover it. 00:30:24.847 [2024-11-15 11:53:50.132439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.847 [2024-11-15 11:53:50.132468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.847 qpair failed and we were unable to recover it. 00:30:24.847 [2024-11-15 11:53:50.132853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.847 [2024-11-15 11:53:50.132885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.847 qpair failed and we were unable to recover it. 00:30:24.847 [2024-11-15 11:53:50.133260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.847 [2024-11-15 11:53:50.133288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.847 qpair failed and we were unable to recover it. 00:30:24.847 [2024-11-15 11:53:50.133534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.847 [2024-11-15 11:53:50.133593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.847 qpair failed and we were unable to recover it. 00:30:24.847 [2024-11-15 11:53:50.133929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.847 [2024-11-15 11:53:50.133960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.847 qpair failed and we were unable to recover it. 00:30:24.847 [2024-11-15 11:53:50.134340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.847 [2024-11-15 11:53:50.134368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.847 qpair failed and we were unable to recover it. 00:30:24.847 [2024-11-15 11:53:50.134710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.847 [2024-11-15 11:53:50.134741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.847 qpair failed and we were unable to recover it. 00:30:24.847 [2024-11-15 11:53:50.135109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.847 [2024-11-15 11:53:50.135145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.847 qpair failed and we were unable to recover it. 00:30:24.847 [2024-11-15 11:53:50.135550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.847 [2024-11-15 11:53:50.135590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.847 qpair failed and we were unable to recover it. 
00:30:24.847 [2024-11-15 11:53:50.135835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.847 [2024-11-15 11:53:50.135868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.847 qpair failed and we were unable to recover it. 00:30:24.847 [2024-11-15 11:53:50.136232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.847 [2024-11-15 11:53:50.136262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.847 qpair failed and we were unable to recover it. 00:30:24.847 [2024-11-15 11:53:50.136622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.847 [2024-11-15 11:53:50.136652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.847 qpair failed and we were unable to recover it. 00:30:24.847 [2024-11-15 11:53:50.136982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.847 [2024-11-15 11:53:50.137011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.847 qpair failed and we were unable to recover it. 00:30:24.847 [2024-11-15 11:53:50.137243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.847 [2024-11-15 11:53:50.137275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.847 qpair failed and we were unable to recover it. 00:30:24.847 [2024-11-15 11:53:50.137597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.847 [2024-11-15 11:53:50.137627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.847 qpair failed and we were unable to recover it. 00:30:24.847 [2024-11-15 11:53:50.137956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.847 [2024-11-15 11:53:50.137986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.847 qpair failed and we were unable to recover it. 00:30:24.847 [2024-11-15 11:53:50.138335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.847 [2024-11-15 11:53:50.138364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.847 qpair failed and we were unable to recover it. 00:30:24.847 [2024-11-15 11:53:50.138690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.847 [2024-11-15 11:53:50.138722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.847 qpair failed and we were unable to recover it. 00:30:24.847 [2024-11-15 11:53:50.139010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.847 [2024-11-15 11:53:50.139039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.847 qpair failed and we were unable to recover it. 
00:30:24.847 [2024-11-15 11:53:50.139406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.847 [2024-11-15 11:53:50.139434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.847 qpair failed and we were unable to recover it. 00:30:24.847 [2024-11-15 11:53:50.139790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.847 [2024-11-15 11:53:50.139820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.847 qpair failed and we were unable to recover it. 00:30:24.847 [2024-11-15 11:53:50.140192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.847 [2024-11-15 11:53:50.140221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.847 qpair failed and we were unable to recover it. 00:30:24.847 [2024-11-15 11:53:50.140583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.847 [2024-11-15 11:53:50.140614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.847 qpair failed and we were unable to recover it. 00:30:24.847 [2024-11-15 11:53:50.140983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.847 [2024-11-15 11:53:50.141013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.847 qpair failed and we were unable to recover it. 00:30:24.847 [2024-11-15 11:53:50.141364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.847 [2024-11-15 11:53:50.141392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.847 qpair failed and we were unable to recover it. 00:30:24.847 [2024-11-15 11:53:50.141697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.847 [2024-11-15 11:53:50.141728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.847 qpair failed and we were unable to recover it. 00:30:24.847 [2024-11-15 11:53:50.142082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.847 [2024-11-15 11:53:50.142112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.847 qpair failed and we were unable to recover it. 00:30:24.847 [2024-11-15 11:53:50.142463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.847 [2024-11-15 11:53:50.142491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.847 qpair failed and we were unable to recover it. 00:30:24.847 [2024-11-15 11:53:50.142820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.847 [2024-11-15 11:53:50.142851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.847 qpair failed and we were unable to recover it. 
00:30:24.847 [2024-11-15 11:53:50.143250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.847 [2024-11-15 11:53:50.143279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.847 qpair failed and we were unable to recover it. 00:30:24.847 [2024-11-15 11:53:50.143629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.847 [2024-11-15 11:53:50.143659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.847 qpair failed and we were unable to recover it. 00:30:24.847 [2024-11-15 11:53:50.144028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.847 [2024-11-15 11:53:50.144057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.847 qpair failed and we were unable to recover it. 00:30:24.847 [2024-11-15 11:53:50.144423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.847 [2024-11-15 11:53:50.144451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.847 qpair failed and we were unable to recover it. 00:30:24.847 [2024-11-15 11:53:50.144835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.847 [2024-11-15 11:53:50.144866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.847 qpair failed and we were unable to recover it. 00:30:24.847 [2024-11-15 11:53:50.145219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.847 [2024-11-15 11:53:50.145254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.847 qpair failed and we were unable to recover it. 00:30:24.848 [2024-11-15 11:53:50.145623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.848 [2024-11-15 11:53:50.145654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.848 qpair failed and we were unable to recover it. 00:30:24.848 [2024-11-15 11:53:50.146011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.848 [2024-11-15 11:53:50.146040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.848 qpair failed and we were unable to recover it. 00:30:24.848 [2024-11-15 11:53:50.146412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.848 [2024-11-15 11:53:50.146440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.848 qpair failed and we were unable to recover it. 00:30:24.848 [2024-11-15 11:53:50.146828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.848 [2024-11-15 11:53:50.146859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.848 qpair failed and we were unable to recover it. 
00:30:24.848 [2024-11-15 11:53:50.147221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.848 [2024-11-15 11:53:50.147252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.848 qpair failed and we were unable to recover it. 00:30:24.848 [2024-11-15 11:53:50.147633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.848 [2024-11-15 11:53:50.147663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.848 qpair failed and we were unable to recover it. 00:30:24.848 [2024-11-15 11:53:50.148018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.848 [2024-11-15 11:53:50.148047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.848 qpair failed and we were unable to recover it. 00:30:24.848 [2024-11-15 11:53:50.148392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.848 [2024-11-15 11:53:50.148421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.848 qpair failed and we were unable to recover it. 00:30:24.848 [2024-11-15 11:53:50.148814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.848 [2024-11-15 11:53:50.148843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.848 qpair failed and we were unable to recover it. 00:30:24.848 [2024-11-15 11:53:50.149212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.848 [2024-11-15 11:53:50.149242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.848 qpair failed and we were unable to recover it. 00:30:24.848 [2024-11-15 11:53:50.149602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.848 [2024-11-15 11:53:50.149632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.848 qpair failed and we were unable to recover it. 00:30:24.848 [2024-11-15 11:53:50.150050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.848 [2024-11-15 11:53:50.150079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.848 qpair failed and we were unable to recover it. 00:30:24.848 [2024-11-15 11:53:50.150443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.848 [2024-11-15 11:53:50.150473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.848 qpair failed and we were unable to recover it. 00:30:24.848 [2024-11-15 11:53:50.150875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.848 [2024-11-15 11:53:50.150908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.848 qpair failed and we were unable to recover it. 
00:30:24.848 [2024-11-15 11:53:50.151303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.848 [2024-11-15 11:53:50.151333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.848 qpair failed and we were unable to recover it. 00:30:24.848 [2024-11-15 11:53:50.151683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.848 [2024-11-15 11:53:50.151714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.848 qpair failed and we were unable to recover it. 00:30:24.848 [2024-11-15 11:53:50.152103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.848 [2024-11-15 11:53:50.152132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.848 qpair failed and we were unable to recover it. 00:30:24.848 [2024-11-15 11:53:50.152483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.848 [2024-11-15 11:53:50.152512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.848 qpair failed and we were unable to recover it. 00:30:24.848 [2024-11-15 11:53:50.152771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.848 [2024-11-15 11:53:50.152801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.848 qpair failed and we were unable to recover it. 00:30:24.848 [2024-11-15 11:53:50.153165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.848 [2024-11-15 11:53:50.153194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.848 qpair failed and we were unable to recover it. 00:30:24.848 [2024-11-15 11:53:50.153444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.848 [2024-11-15 11:53:50.153473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.848 qpair failed and we were unable to recover it. 00:30:24.848 [2024-11-15 11:53:50.153824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.848 [2024-11-15 11:53:50.153854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.848 qpair failed and we were unable to recover it. 00:30:24.848 [2024-11-15 11:53:50.154197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.848 [2024-11-15 11:53:50.154227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.848 qpair failed and we were unable to recover it. 00:30:24.848 [2024-11-15 11:53:50.154608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.848 [2024-11-15 11:53:50.154639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.848 qpair failed and we were unable to recover it. 
00:30:24.848 [2024-11-15 11:53:50.155044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.848 [2024-11-15 11:53:50.155072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.848 qpair failed and we were unable to recover it. 00:30:24.848 [2024-11-15 11:53:50.155433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.848 [2024-11-15 11:53:50.155462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.848 qpair failed and we were unable to recover it. 00:30:24.848 [2024-11-15 11:53:50.155855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.848 [2024-11-15 11:53:50.155886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.848 qpair failed and we were unable to recover it. 00:30:24.848 [2024-11-15 11:53:50.156170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.848 [2024-11-15 11:53:50.156198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.848 qpair failed and we were unable to recover it. 00:30:24.848 [2024-11-15 11:53:50.156444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.848 [2024-11-15 11:53:50.156473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.848 qpair failed and we were unable to recover it. 00:30:24.848 [2024-11-15 11:53:50.156821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.848 [2024-11-15 11:53:50.156852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.848 qpair failed and we were unable to recover it. 00:30:24.848 [2024-11-15 11:53:50.157187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.848 [2024-11-15 11:53:50.157215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.848 qpair failed and we were unable to recover it. 00:30:24.848 [2024-11-15 11:53:50.157572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.848 [2024-11-15 11:53:50.157603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.848 qpair failed and we were unable to recover it. 00:30:24.848 [2024-11-15 11:53:50.157958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.848 [2024-11-15 11:53:50.157988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.848 qpair failed and we were unable to recover it. 00:30:24.848 [2024-11-15 11:53:50.158361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.848 [2024-11-15 11:53:50.158389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.848 qpair failed and we were unable to recover it. 
00:30:24.848 [2024-11-15 11:53:50.158635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.848 [2024-11-15 11:53:50.158666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.848 qpair failed and we were unable to recover it. 00:30:24.848 [2024-11-15 11:53:50.159086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.848 [2024-11-15 11:53:50.159116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.848 qpair failed and we were unable to recover it. 00:30:24.848 [2024-11-15 11:53:50.159477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.849 [2024-11-15 11:53:50.159507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.849 qpair failed and we were unable to recover it. 00:30:24.849 [2024-11-15 11:53:50.159884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.849 [2024-11-15 11:53:50.159915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.849 qpair failed and we were unable to recover it. 00:30:24.849 [2024-11-15 11:53:50.160287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.849 [2024-11-15 11:53:50.160317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.849 qpair failed and we were unable to recover it. 00:30:24.849 [2024-11-15 11:53:50.160769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.849 [2024-11-15 11:53:50.160802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.849 qpair failed and we were unable to recover it. 00:30:24.849 [2024-11-15 11:53:50.161137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.849 [2024-11-15 11:53:50.161173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.849 qpair failed and we were unable to recover it. 00:30:24.849 [2024-11-15 11:53:50.161513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.849 [2024-11-15 11:53:50.161542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.849 qpair failed and we were unable to recover it. 00:30:24.849 [2024-11-15 11:53:50.162217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.849 [2024-11-15 11:53:50.162258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.849 qpair failed and we were unable to recover it. 00:30:24.849 [2024-11-15 11:53:50.162637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.849 [2024-11-15 11:53:50.162675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.849 qpair failed and we were unable to recover it. 
00:30:24.849 [2024-11-15 11:53:50.163046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.849 [2024-11-15 11:53:50.163077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.849 qpair failed and we were unable to recover it. 00:30:24.849 [2024-11-15 11:53:50.163436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.849 [2024-11-15 11:53:50.163468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.849 qpair failed and we were unable to recover it. 00:30:24.849 [2024-11-15 11:53:50.163754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.849 [2024-11-15 11:53:50.163785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.849 qpair failed and we were unable to recover it. 00:30:24.849 [2024-11-15 11:53:50.164144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.849 [2024-11-15 11:53:50.164174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.849 qpair failed and we were unable to recover it. 00:30:24.849 [2024-11-15 11:53:50.164532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.849 [2024-11-15 11:53:50.164573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.849 qpair failed and we were unable to recover it. 00:30:24.849 [2024-11-15 11:53:50.164916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.849 [2024-11-15 11:53:50.164945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.849 qpair failed and we were unable to recover it. 00:30:24.849 [2024-11-15 11:53:50.165305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.849 [2024-11-15 11:53:50.165335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.849 qpair failed and we were unable to recover it. 00:30:24.849 [2024-11-15 11:53:50.165658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.849 [2024-11-15 11:53:50.165690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.849 qpair failed and we were unable to recover it. 00:30:24.849 [2024-11-15 11:53:50.166029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.849 [2024-11-15 11:53:50.166058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.849 qpair failed and we were unable to recover it. 00:30:24.849 [2024-11-15 11:53:50.166439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.849 [2024-11-15 11:53:50.166468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.849 qpair failed and we were unable to recover it. 
00:30:24.849 [2024-11-15 11:53:50.166718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.849 [2024-11-15 11:53:50.166751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.849 qpair failed and we were unable to recover it. 00:30:24.849 [2024-11-15 11:53:50.167143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.849 [2024-11-15 11:53:50.167173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.849 qpair failed and we were unable to recover it. 00:30:24.849 [2024-11-15 11:53:50.167415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.849 [2024-11-15 11:53:50.167443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.849 qpair failed and we were unable to recover it. 00:30:24.849 [2024-11-15 11:53:50.167900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.849 [2024-11-15 11:53:50.167931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.849 qpair failed and we were unable to recover it. 00:30:24.849 [2024-11-15 11:53:50.168273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.849 [2024-11-15 11:53:50.168303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.849 qpair failed and we were unable to recover it. 00:30:24.849 [2024-11-15 11:53:50.168587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.849 [2024-11-15 11:53:50.168618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.849 qpair failed and we were unable to recover it. 00:30:24.849 [2024-11-15 11:53:50.169010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.849 [2024-11-15 11:53:50.169039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.849 qpair failed and we were unable to recover it. 00:30:24.849 [2024-11-15 11:53:50.169361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.849 [2024-11-15 11:53:50.169391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.849 qpair failed and we were unable to recover it. 00:30:24.849 [2024-11-15 11:53:50.169646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.849 [2024-11-15 11:53:50.169676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.849 qpair failed and we were unable to recover it. 00:30:24.849 [2024-11-15 11:53:50.169947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.849 [2024-11-15 11:53:50.169979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.849 qpair failed and we were unable to recover it. 
00:30:24.849 [2024-11-15 11:53:50.170395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.849 [2024-11-15 11:53:50.170426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.849 qpair failed and we were unable to recover it. 00:30:24.849 [2024-11-15 11:53:50.170685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.849 [2024-11-15 11:53:50.170715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.849 qpair failed and we were unable to recover it. 00:30:24.849 [2024-11-15 11:53:50.171119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.849 [2024-11-15 11:53:50.171149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.849 qpair failed and we were unable to recover it. 00:30:24.850 [2024-11-15 11:53:50.171516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.850 [2024-11-15 11:53:50.171546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.850 qpair failed and we were unable to recover it. 00:30:24.850 [2024-11-15 11:53:50.171852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.850 [2024-11-15 11:53:50.171882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.850 qpair failed and we were unable to recover it. 00:30:24.850 [2024-11-15 11:53:50.172224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.850 [2024-11-15 11:53:50.172254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.850 qpair failed and we were unable to recover it. 00:30:24.850 [2024-11-15 11:53:50.172619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.850 [2024-11-15 11:53:50.172650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.850 qpair failed and we were unable to recover it. 00:30:24.850 [2024-11-15 11:53:50.173034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.850 [2024-11-15 11:53:50.173063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.850 qpair failed and we were unable to recover it. 00:30:24.850 [2024-11-15 11:53:50.173422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.850 [2024-11-15 11:53:50.173451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.850 qpair failed and we were unable to recover it. 00:30:24.850 [2024-11-15 11:53:50.173817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.850 [2024-11-15 11:53:50.173847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.850 qpair failed and we were unable to recover it. 
00:30:24.850 [2024-11-15 11:53:50.174188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.850 [2024-11-15 11:53:50.174218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.850 qpair failed and we were unable to recover it. 00:30:24.850 [2024-11-15 11:53:50.174613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.850 [2024-11-15 11:53:50.174646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.850 qpair failed and we were unable to recover it. 00:30:24.850 [2024-11-15 11:53:50.175034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.850 [2024-11-15 11:53:50.175063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.850 qpair failed and we were unable to recover it. 00:30:24.850 [2024-11-15 11:53:50.175402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.850 [2024-11-15 11:53:50.175431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.850 qpair failed and we were unable to recover it. 00:30:24.850 [2024-11-15 11:53:50.175836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.850 [2024-11-15 11:53:50.175867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.850 qpair failed and we were unable to recover it. 00:30:24.850 [2024-11-15 11:53:50.176225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.850 [2024-11-15 11:53:50.176255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.850 qpair failed and we were unable to recover it. 00:30:24.850 [2024-11-15 11:53:50.176609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.850 [2024-11-15 11:53:50.176641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.850 qpair failed and we were unable to recover it. 00:30:24.850 [2024-11-15 11:53:50.177058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.850 [2024-11-15 11:53:50.177088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.850 qpair failed and we were unable to recover it. 00:30:24.850 [2024-11-15 11:53:50.177442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.850 [2024-11-15 11:53:50.177471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.850 qpair failed and we were unable to recover it. 00:30:24.850 [2024-11-15 11:53:50.177881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.850 [2024-11-15 11:53:50.177912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.850 qpair failed and we were unable to recover it. 
00:30:24.850 [2024-11-15 11:53:50.178293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.850 [2024-11-15 11:53:50.178322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.850 qpair failed and we were unable to recover it. 00:30:24.850 [2024-11-15 11:53:50.178652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.850 [2024-11-15 11:53:50.178683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.850 qpair failed and we were unable to recover it. 00:30:24.850 [2024-11-15 11:53:50.179003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.850 [2024-11-15 11:53:50.179033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.850 qpair failed and we were unable to recover it. 00:30:24.850 [2024-11-15 11:53:50.179297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.850 [2024-11-15 11:53:50.179326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.850 qpair failed and we were unable to recover it. 00:30:24.850 [2024-11-15 11:53:50.179675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.850 [2024-11-15 11:53:50.179707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.850 qpair failed and we were unable to recover it. 00:30:24.850 [2024-11-15 11:53:50.180084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.850 [2024-11-15 11:53:50.180113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.850 qpair failed and we were unable to recover it. 00:30:24.850 [2024-11-15 11:53:50.180505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.850 [2024-11-15 11:53:50.180534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.850 qpair failed and we were unable to recover it. 00:30:24.850 [2024-11-15 11:53:50.180929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.850 [2024-11-15 11:53:50.180960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.850 qpair failed and we were unable to recover it. 00:30:24.850 [2024-11-15 11:53:50.181321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.850 [2024-11-15 11:53:50.181349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.850 qpair failed and we were unable to recover it. 00:30:24.850 [2024-11-15 11:53:50.181641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.850 [2024-11-15 11:53:50.181671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.850 qpair failed and we were unable to recover it. 
00:30:24.850 [2024-11-15 11:53:50.181990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.850 [2024-11-15 11:53:50.182018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.850 qpair failed and we were unable to recover it. 00:30:24.850 [2024-11-15 11:53:50.182380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.850 [2024-11-15 11:53:50.182410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.850 qpair failed and we were unable to recover it. 00:30:24.850 [2024-11-15 11:53:50.182818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.850 [2024-11-15 11:53:50.182848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.850 qpair failed and we were unable to recover it. 00:30:24.850 [2024-11-15 11:53:50.183221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.850 [2024-11-15 11:53:50.183249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.850 qpair failed and we were unable to recover it. 00:30:24.850 [2024-11-15 11:53:50.183627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.850 [2024-11-15 11:53:50.183657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.850 qpair failed and we were unable to recover it. 00:30:24.850 [2024-11-15 11:53:50.183811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.850 [2024-11-15 11:53:50.183842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.850 qpair failed and we were unable to recover it. 00:30:24.850 [2024-11-15 11:53:50.184206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.850 [2024-11-15 11:53:50.184236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.850 qpair failed and we were unable to recover it. 00:30:24.851 [2024-11-15 11:53:50.184593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.851 [2024-11-15 11:53:50.184624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.851 qpair failed and we were unable to recover it. 00:30:24.851 [2024-11-15 11:53:50.184930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.851 [2024-11-15 11:53:50.184958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.851 qpair failed and we were unable to recover it. 00:30:24.851 [2024-11-15 11:53:50.185291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.851 [2024-11-15 11:53:50.185320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.851 qpair failed and we were unable to recover it. 
00:30:24.855 [2024-11-15 11:53:50.241555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.855 [2024-11-15 11:53:50.241599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.855 qpair failed and we were unable to recover it. 00:30:24.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1264190 Killed "${NVMF_APP[@]}" "$@" 00:30:24.855 [2024-11-15 11:53:50.242038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.855 [2024-11-15 11:53:50.242067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.855 qpair failed and we were unable to recover it. 00:30:24.855 [2024-11-15 11:53:50.242401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.855 [2024-11-15 11:53:50.242431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.855 qpair failed and we were unable to recover it. 00:30:24.855 [2024-11-15 11:53:50.242751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.855 [2024-11-15 11:53:50.242783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.855 qpair failed and we were unable to recover it. 00:30:24.855 11:53:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:30:24.855 [2024-11-15 11:53:50.243040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.855 [2024-11-15 11:53:50.243069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.855 qpair failed and we were unable to recover it. 00:30:24.855 11:53:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:24.855 [2024-11-15 11:53:50.243469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.855 [2024-11-15 11:53:50.243498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.855 qpair failed and we were unable to recover it. 00:30:24.855 11:53:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:24.855 11:53:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:24.855 [2024-11-15 11:53:50.243880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.855 [2024-11-15 11:53:50.243909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.855 qpair failed and we were unable to recover it. 
00:30:24.855 11:53:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:24.855 [2024-11-15 11:53:50.244290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.855 [2024-11-15 11:53:50.244320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.855 qpair failed and we were unable to recover it. 00:30:24.855 [2024-11-15 11:53:50.244677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.855 [2024-11-15 11:53:50.244708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.855 qpair failed and we were unable to recover it. 00:30:24.855 [2024-11-15 11:53:50.244971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.855 [2024-11-15 11:53:50.244999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.855 qpair failed and we were unable to recover it. 00:30:24.855 [2024-11-15 11:53:50.245363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.855 [2024-11-15 11:53:50.245391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.855 qpair failed and we were unable to recover it. 00:30:24.855 [2024-11-15 11:53:50.245773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.855 [2024-11-15 11:53:50.245804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.855 qpair failed and we were unable to recover it. 00:30:24.855 [2024-11-15 11:53:50.246144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.855 [2024-11-15 11:53:50.246172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.855 qpair failed and we were unable to recover it. 00:30:24.855 [2024-11-15 11:53:50.246530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.855 [2024-11-15 11:53:50.246560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.855 qpair failed and we were unable to recover it. 00:30:24.855 [2024-11-15 11:53:50.246946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.855 [2024-11-15 11:53:50.246977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.855 qpair failed and we were unable to recover it. 00:30:24.855 [2024-11-15 11:53:50.247355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.855 [2024-11-15 11:53:50.247384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.855 qpair failed and we were unable to recover it. 00:30:24.855 [2024-11-15 11:53:50.247653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.855 [2024-11-15 11:53:50.247683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.855 qpair failed and we were unable to recover it. 
00:30:24.855 [2024-11-15 11:53:50.248043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.855 [2024-11-15 11:53:50.248072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.855 qpair failed and we were unable to recover it. 00:30:24.855 [2024-11-15 11:53:50.248441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.855 [2024-11-15 11:53:50.248468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.855 qpair failed and we were unable to recover it. 00:30:24.855 [2024-11-15 11:53:50.248846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.855 [2024-11-15 11:53:50.248876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.855 qpair failed and we were unable to recover it. 00:30:24.855 [2024-11-15 11:53:50.249261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.855 [2024-11-15 11:53:50.249291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.855 qpair failed and we were unable to recover it. 00:30:24.855 [2024-11-15 11:53:50.249644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.855 [2024-11-15 11:53:50.249674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.855 qpair failed and we were unable to recover it. 00:30:24.855 [2024-11-15 11:53:50.250098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.855 [2024-11-15 11:53:50.250128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.855 qpair failed and we were unable to recover it. 00:30:24.855 [2024-11-15 11:53:50.250540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.855 [2024-11-15 11:53:50.250578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.855 qpair failed and we were unable to recover it. 00:30:24.855 [2024-11-15 11:53:50.250868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.855 [2024-11-15 11:53:50.250897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.855 qpair failed and we were unable to recover it. 00:30:24.855 [2024-11-15 11:53:50.251276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.856 [2024-11-15 11:53:50.251306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.856 qpair failed and we were unable to recover it. 00:30:24.856 [2024-11-15 11:53:50.251679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.856 [2024-11-15 11:53:50.251709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:24.856 qpair failed and we were unable to recover it. 
[... connect() failed / qpair failed triplet from the host retry loop continues between the trace lines below (11:53:50.252063 through 11:53:50.253915); duplicate messages elided ...]
00:30:24.856 11:53:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1265050
00:30:24.856 11:53:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1265050
00:30:24.856 11:53:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 1265050 ']'
00:30:24.856 11:53:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:30:24.856 11:53:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:24.856 11:53:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100
00:30:24.856 11:53:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:24.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:24.856 11:53:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable
00:30:24.856 11:53:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect() failed / qpair failed triplet repeated around these trace lines (11:53:50.254279 through 11:53:50.257375); duplicate messages elided ...]
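The trace above shows the harness relaunching nvmf_tgt inside the cvl_0_0_ns_spdk network namespace, then blocking in waitforlisten until the target's RPC socket appears. A rough bash sketch of that wait pattern (illustrative only: the pid, socket path, and retry count are the values visible in the trace, and the loop body is not SPDK's actual waitforlisten implementation):

#!/usr/bin/env bash
# Poll until the freshly started target exposes its RPC UNIX socket,
# bailing out if the process dies first or the retry budget is spent.
nvmfpid=1265050                  # pid of the nvmf_tgt launched above
rpc_addr=/var/tmp/spdk.sock      # RPC socket path from the trace
max_retries=100
while (( max_retries-- > 0 )); do
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening"; exit 1; }
    [ -S "$rpc_addr" ] && { echo "nvmf_tgt is listening on $rpc_addr"; exit 0; }
    sleep 0.1
done
echo "timed out waiting for $rpc_addr"; exit 1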
[... connect() failed / qpair failed triplet repeated continuously from 11:53:50.257612 through 11:53:50.307521 while the host kept retrying 10.0.0.2:4420; duplicate messages elided ...]
[... connect() failed / qpair failed triplet repeated (11:53:50.307885 through 11:53:50.309342); duplicate messages elided ...]
00:30:24.860 [2024-11-15 11:53:50.309593] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
00:30:24.860 [2024-11-15 11:53:50.309671] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[... connect() failed / qpair failed triplet repeated (11:53:50.309696 through 11:53:50.310974); duplicate messages elided ...]
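The EAL line above shows the restarted target initializing with core mask 0xF0, matching the -m 0xF0 passed to nvmf_tgt in the earlier trace. As a quick sanity check (plain bash arithmetic, nothing SPDK-specific), the mask decodes to logical cores 4 through 7:

#!/usr/bin/env bash
# Decode the DPDK/SPDK core mask from the EAL parameters line: 0xF0 = 0b11110000.
mask=0xF0
printf 'core mask %s selects cores:' "$mask"
for cpu in {0..31}; do
    (( (mask >> cpu) & 1 )) && printf ' %d' "$cpu"
done
printf '\n'   # prints: core mask 0xF0 selects cores: 4 5 6 7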
00:30:25.141 [2024-11-15 11:53:50.382608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.141 [2024-11-15 11:53:50.382639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.141 qpair failed and we were unable to recover it. 00:30:25.141 [2024-11-15 11:53:50.383001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.141 [2024-11-15 11:53:50.383030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.141 qpair failed and we were unable to recover it. 00:30:25.141 [2024-11-15 11:53:50.383392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.141 [2024-11-15 11:53:50.383422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.141 qpair failed and we were unable to recover it. 00:30:25.141 [2024-11-15 11:53:50.383788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.141 [2024-11-15 11:53:50.383819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.141 qpair failed and we were unable to recover it. 00:30:25.141 [2024-11-15 11:53:50.384197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.141 [2024-11-15 11:53:50.384227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.141 qpair failed and we were unable to recover it. 00:30:25.141 [2024-11-15 11:53:50.384601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.141 [2024-11-15 11:53:50.384638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.141 qpair failed and we were unable to recover it. 00:30:25.141 [2024-11-15 11:53:50.385020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.141 [2024-11-15 11:53:50.385050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.141 qpair failed and we were unable to recover it. 00:30:25.141 [2024-11-15 11:53:50.385398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.141 [2024-11-15 11:53:50.385426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.141 qpair failed and we were unable to recover it. 00:30:25.141 [2024-11-15 11:53:50.385785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.141 [2024-11-15 11:53:50.385816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.141 qpair failed and we were unable to recover it. 00:30:25.141 [2024-11-15 11:53:50.386048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.141 [2024-11-15 11:53:50.386076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.141 qpair failed and we were unable to recover it. 
00:30:25.141 [2024-11-15 11:53:50.386444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.141 [2024-11-15 11:53:50.386471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.141 qpair failed and we were unable to recover it. 00:30:25.141 [2024-11-15 11:53:50.386936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.141 [2024-11-15 11:53:50.386968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.141 qpair failed and we were unable to recover it. 00:30:25.141 [2024-11-15 11:53:50.387332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.141 [2024-11-15 11:53:50.387362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.141 qpair failed and we were unable to recover it. 00:30:25.141 [2024-11-15 11:53:50.387747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.141 [2024-11-15 11:53:50.387776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.141 qpair failed and we were unable to recover it. 00:30:25.141 [2024-11-15 11:53:50.388120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.141 [2024-11-15 11:53:50.388149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.141 qpair failed and we were unable to recover it. 00:30:25.141 [2024-11-15 11:53:50.388525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.141 [2024-11-15 11:53:50.388554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.141 qpair failed and we were unable to recover it. 00:30:25.141 [2024-11-15 11:53:50.388834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.141 [2024-11-15 11:53:50.388863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.141 qpair failed and we were unable to recover it. 00:30:25.141 [2024-11-15 11:53:50.389228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.141 [2024-11-15 11:53:50.389257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.141 qpair failed and we were unable to recover it. 00:30:25.141 [2024-11-15 11:53:50.389629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.141 [2024-11-15 11:53:50.389658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.141 qpair failed and we were unable to recover it. 00:30:25.141 [2024-11-15 11:53:50.389834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.141 [2024-11-15 11:53:50.389862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.141 qpair failed and we were unable to recover it. 
00:30:25.141 [2024-11-15 11:53:50.390116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.141 [2024-11-15 11:53:50.390144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.141 qpair failed and we were unable to recover it. 00:30:25.141 [2024-11-15 11:53:50.390522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.141 [2024-11-15 11:53:50.390550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.142 qpair failed and we were unable to recover it. 00:30:25.142 [2024-11-15 11:53:50.390931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.142 [2024-11-15 11:53:50.390961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.142 qpair failed and we were unable to recover it. 00:30:25.142 [2024-11-15 11:53:50.391328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.142 [2024-11-15 11:53:50.391356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.142 qpair failed and we were unable to recover it. 00:30:25.142 [2024-11-15 11:53:50.391768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.142 [2024-11-15 11:53:50.391799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.142 qpair failed and we were unable to recover it. 00:30:25.142 [2024-11-15 11:53:50.392187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.142 [2024-11-15 11:53:50.392215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.142 qpair failed and we were unable to recover it. 00:30:25.142 [2024-11-15 11:53:50.392585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.142 [2024-11-15 11:53:50.392616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.142 qpair failed and we were unable to recover it. 00:30:25.142 [2024-11-15 11:53:50.392906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.142 [2024-11-15 11:53:50.392934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.142 qpair failed and we were unable to recover it. 00:30:25.142 [2024-11-15 11:53:50.393327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.142 [2024-11-15 11:53:50.393358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.142 qpair failed and we were unable to recover it. 00:30:25.142 [2024-11-15 11:53:50.393738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.142 [2024-11-15 11:53:50.393769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.142 qpair failed and we were unable to recover it. 
00:30:25.142 [2024-11-15 11:53:50.394128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.142 [2024-11-15 11:53:50.394156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.142 qpair failed and we were unable to recover it. 00:30:25.142 [2024-11-15 11:53:50.394536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.142 [2024-11-15 11:53:50.394576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.142 qpair failed and we were unable to recover it. 00:30:25.142 [2024-11-15 11:53:50.394946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.142 [2024-11-15 11:53:50.394980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.142 qpair failed and we were unable to recover it. 00:30:25.142 [2024-11-15 11:53:50.395339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.142 [2024-11-15 11:53:50.395368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.142 qpair failed and we were unable to recover it. 00:30:25.142 [2024-11-15 11:53:50.395735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.142 [2024-11-15 11:53:50.395766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.142 qpair failed and we were unable to recover it. 00:30:25.142 [2024-11-15 11:53:50.396187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.142 [2024-11-15 11:53:50.396216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.142 qpair failed and we were unable to recover it. 00:30:25.142 [2024-11-15 11:53:50.396540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.142 [2024-11-15 11:53:50.396580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.142 qpair failed and we were unable to recover it. 00:30:25.142 [2024-11-15 11:53:50.396958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.142 [2024-11-15 11:53:50.396987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.142 qpair failed and we were unable to recover it. 00:30:25.142 [2024-11-15 11:53:50.397411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.142 [2024-11-15 11:53:50.397439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.142 qpair failed and we were unable to recover it. 00:30:25.142 [2024-11-15 11:53:50.397780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.142 [2024-11-15 11:53:50.397810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.142 qpair failed and we were unable to recover it. 
00:30:25.142 [2024-11-15 11:53:50.398172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.142 [2024-11-15 11:53:50.398202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.142 qpair failed and we were unable to recover it. 00:30:25.142 [2024-11-15 11:53:50.398585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.142 [2024-11-15 11:53:50.398614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.142 qpair failed and we were unable to recover it. 00:30:25.142 [2024-11-15 11:53:50.398976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.142 [2024-11-15 11:53:50.399004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.142 qpair failed and we were unable to recover it. 00:30:25.142 [2024-11-15 11:53:50.399357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.142 [2024-11-15 11:53:50.399385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.142 qpair failed and we were unable to recover it. 00:30:25.142 [2024-11-15 11:53:50.399783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.142 [2024-11-15 11:53:50.399814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.142 qpair failed and we were unable to recover it. 00:30:25.142 [2024-11-15 11:53:50.400150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.142 [2024-11-15 11:53:50.400180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.142 qpair failed and we were unable to recover it. 00:30:25.142 [2024-11-15 11:53:50.400549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.142 [2024-11-15 11:53:50.400592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.142 qpair failed and we were unable to recover it. 00:30:25.142 [2024-11-15 11:53:50.401000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.142 [2024-11-15 11:53:50.401029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.142 qpair failed and we were unable to recover it. 00:30:25.142 [2024-11-15 11:53:50.401389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.142 [2024-11-15 11:53:50.401418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.142 qpair failed and we were unable to recover it. 00:30:25.142 [2024-11-15 11:53:50.401791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.142 [2024-11-15 11:53:50.401823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.142 qpair failed and we were unable to recover it. 
00:30:25.142 [2024-11-15 11:53:50.402049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.142 [2024-11-15 11:53:50.402081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.142 qpair failed and we were unable to recover it. 00:30:25.142 [2024-11-15 11:53:50.402413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.142 [2024-11-15 11:53:50.402443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.142 qpair failed and we were unable to recover it. 00:30:25.142 [2024-11-15 11:53:50.402581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.142 [2024-11-15 11:53:50.402615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.142 qpair failed and we were unable to recover it. 00:30:25.142 [2024-11-15 11:53:50.402885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.142 [2024-11-15 11:53:50.402914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.142 qpair failed and we were unable to recover it. 00:30:25.142 [2024-11-15 11:53:50.403273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.142 [2024-11-15 11:53:50.403301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.142 qpair failed and we were unable to recover it. 00:30:25.142 [2024-11-15 11:53:50.403554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.142 [2024-11-15 11:53:50.403612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.142 qpair failed and we were unable to recover it. 00:30:25.142 [2024-11-15 11:53:50.404057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.142 [2024-11-15 11:53:50.404086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.142 qpair failed and we were unable to recover it. 00:30:25.142 [2024-11-15 11:53:50.404473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.142 [2024-11-15 11:53:50.404502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.142 qpair failed and we were unable to recover it. 00:30:25.142 [2024-11-15 11:53:50.404937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.142 [2024-11-15 11:53:50.404968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.142 qpair failed and we were unable to recover it. 00:30:25.142 [2024-11-15 11:53:50.405325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.142 [2024-11-15 11:53:50.405354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.142 qpair failed and we were unable to recover it. 
00:30:25.143 [2024-11-15 11:53:50.405724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.143 [2024-11-15 11:53:50.405754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.143 qpair failed and we were unable to recover it. 00:30:25.143 [2024-11-15 11:53:50.406012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.143 [2024-11-15 11:53:50.406041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.143 qpair failed and we were unable to recover it. 00:30:25.143 [2024-11-15 11:53:50.406283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.143 [2024-11-15 11:53:50.406312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.143 qpair failed and we were unable to recover it. 00:30:25.143 [2024-11-15 11:53:50.406685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.143 [2024-11-15 11:53:50.406714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.143 qpair failed and we were unable to recover it. 00:30:25.143 [2024-11-15 11:53:50.407092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.143 [2024-11-15 11:53:50.407122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.143 qpair failed and we were unable to recover it. 00:30:25.143 [2024-11-15 11:53:50.407395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.143 [2024-11-15 11:53:50.407424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.143 qpair failed and we were unable to recover it. 00:30:25.143 [2024-11-15 11:53:50.407808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.143 [2024-11-15 11:53:50.407837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.143 qpair failed and we were unable to recover it. 00:30:25.143 [2024-11-15 11:53:50.408258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.143 [2024-11-15 11:53:50.408286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.143 qpair failed and we were unable to recover it. 00:30:25.143 [2024-11-15 11:53:50.408619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.143 [2024-11-15 11:53:50.408649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.143 qpair failed and we were unable to recover it. 00:30:25.143 [2024-11-15 11:53:50.409018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.143 [2024-11-15 11:53:50.409047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.143 qpair failed and we were unable to recover it. 
00:30:25.143 [2024-11-15 11:53:50.409389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.143 [2024-11-15 11:53:50.409417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.143 qpair failed and we were unable to recover it. 00:30:25.143 [2024-11-15 11:53:50.409810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.143 [2024-11-15 11:53:50.409840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.143 qpair failed and we were unable to recover it. 00:30:25.143 [2024-11-15 11:53:50.410104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.143 [2024-11-15 11:53:50.410136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.143 qpair failed and we were unable to recover it. 00:30:25.143 [2024-11-15 11:53:50.410452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.143 [2024-11-15 11:53:50.410487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.143 qpair failed and we were unable to recover it. 00:30:25.143 [2024-11-15 11:53:50.410834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.143 [2024-11-15 11:53:50.410865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.143 qpair failed and we were unable to recover it. 00:30:25.143 [2024-11-15 11:53:50.411226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.143 [2024-11-15 11:53:50.411256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.143 qpair failed and we were unable to recover it. 00:30:25.143 [2024-11-15 11:53:50.411615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.143 [2024-11-15 11:53:50.411645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.143 qpair failed and we were unable to recover it. 00:30:25.143 [2024-11-15 11:53:50.411882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.143 [2024-11-15 11:53:50.411912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.143 qpair failed and we were unable to recover it. 00:30:25.143 [2024-11-15 11:53:50.412245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.143 [2024-11-15 11:53:50.412273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.143 qpair failed and we were unable to recover it. 00:30:25.143 [2024-11-15 11:53:50.412540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.143 [2024-11-15 11:53:50.412584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.143 qpair failed and we were unable to recover it. 
00:30:25.143 [2024-11-15 11:53:50.412824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.143 [2024-11-15 11:53:50.412858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.143 qpair failed and we were unable to recover it. 00:30:25.143 [2024-11-15 11:53:50.413260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.143 [2024-11-15 11:53:50.413288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.143 qpair failed and we were unable to recover it. 00:30:25.143 [2024-11-15 11:53:50.413688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.143 [2024-11-15 11:53:50.413719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.143 qpair failed and we were unable to recover it. 00:30:25.143 [2024-11-15 11:53:50.414022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.143 [2024-11-15 11:53:50.414051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.143 qpair failed and we were unable to recover it. 00:30:25.143 [2024-11-15 11:53:50.414424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.143 [2024-11-15 11:53:50.414453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.143 qpair failed and we were unable to recover it. 00:30:25.143 [2024-11-15 11:53:50.414829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.143 [2024-11-15 11:53:50.414860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.143 qpair failed and we were unable to recover it. 00:30:25.143 [2024-11-15 11:53:50.415012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.143 [2024-11-15 11:53:50.415044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.143 qpair failed and we were unable to recover it. 00:30:25.143 [2024-11-15 11:53:50.415271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.143 [2024-11-15 11:53:50.415300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.143 qpair failed and we were unable to recover it. 00:30:25.143 [2024-11-15 11:53:50.415693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.143 [2024-11-15 11:53:50.415724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.143 qpair failed and we were unable to recover it. 00:30:25.143 [2024-11-15 11:53:50.416102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.143 [2024-11-15 11:53:50.416131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.143 qpair failed and we were unable to recover it. 
00:30:25.143 [2024-11-15 11:53:50.416247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:30:25.146 [2024-11-15 11:53:50.450437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.146 [2024-11-15 11:53:50.450466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:25.146 qpair failed and we were unable to recover it.
00:30:25.146 [2024-11-15 11:53:50.450704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.146 [2024-11-15 11:53:50.450735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.146 qpair failed and we were unable to recover it. 00:30:25.146 [2024-11-15 11:53:50.451128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.146 [2024-11-15 11:53:50.451157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.146 qpair failed and we were unable to recover it. 00:30:25.146 [2024-11-15 11:53:50.451539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.146 [2024-11-15 11:53:50.451580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.146 qpair failed and we were unable to recover it. 00:30:25.146 [2024-11-15 11:53:50.451938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.146 [2024-11-15 11:53:50.451967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.146 qpair failed and we were unable to recover it. 00:30:25.146 [2024-11-15 11:53:50.452338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.146 [2024-11-15 11:53:50.452368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.146 qpair failed and we were unable to recover it. 00:30:25.146 [2024-11-15 11:53:50.452634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.146 [2024-11-15 11:53:50.452664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.146 qpair failed and we were unable to recover it. 00:30:25.146 [2024-11-15 11:53:50.453028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.146 [2024-11-15 11:53:50.453058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.146 qpair failed and we were unable to recover it. 00:30:25.146 [2024-11-15 11:53:50.453445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.146 [2024-11-15 11:53:50.453474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.146 qpair failed and we were unable to recover it. 00:30:25.146 [2024-11-15 11:53:50.453852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.146 [2024-11-15 11:53:50.453883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.146 qpair failed and we were unable to recover it. 00:30:25.146 [2024-11-15 11:53:50.454266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.146 [2024-11-15 11:53:50.454294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.146 qpair failed and we were unable to recover it. 
00:30:25.146 [2024-11-15 11:53:50.454658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.146 [2024-11-15 11:53:50.454688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.146 qpair failed and we were unable to recover it. 00:30:25.146 [2024-11-15 11:53:50.455041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.146 [2024-11-15 11:53:50.455069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.146 qpair failed and we were unable to recover it. 00:30:25.146 [2024-11-15 11:53:50.455429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.146 [2024-11-15 11:53:50.455460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.146 qpair failed and we were unable to recover it. 00:30:25.146 [2024-11-15 11:53:50.455839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.146 [2024-11-15 11:53:50.455870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.146 qpair failed and we were unable to recover it. 00:30:25.146 [2024-11-15 11:53:50.456227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.146 [2024-11-15 11:53:50.456255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.146 qpair failed and we were unable to recover it. 00:30:25.146 [2024-11-15 11:53:50.456639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.146 [2024-11-15 11:53:50.456675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.146 qpair failed and we were unable to recover it. 00:30:25.146 [2024-11-15 11:53:50.457010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.146 [2024-11-15 11:53:50.457040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.146 qpair failed and we were unable to recover it. 00:30:25.146 [2024-11-15 11:53:50.457306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.146 [2024-11-15 11:53:50.457336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.146 qpair failed and we were unable to recover it. 00:30:25.146 [2024-11-15 11:53:50.457580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.146 [2024-11-15 11:53:50.457610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.146 qpair failed and we were unable to recover it. 00:30:25.146 [2024-11-15 11:53:50.458004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.146 [2024-11-15 11:53:50.458033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.146 qpair failed and we were unable to recover it. 
00:30:25.146 [2024-11-15 11:53:50.458428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.146 [2024-11-15 11:53:50.458458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.146 qpair failed and we were unable to recover it. 00:30:25.146 [2024-11-15 11:53:50.458831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.146 [2024-11-15 11:53:50.458862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.146 qpair failed and we were unable to recover it. 00:30:25.146 [2024-11-15 11:53:50.459204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.146 [2024-11-15 11:53:50.459231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.146 qpair failed and we were unable to recover it. 00:30:25.146 [2024-11-15 11:53:50.459604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.146 [2024-11-15 11:53:50.459634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.146 qpair failed and we were unable to recover it. 00:30:25.146 [2024-11-15 11:53:50.460002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.146 [2024-11-15 11:53:50.460032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.146 qpair failed and we were unable to recover it. 00:30:25.146 [2024-11-15 11:53:50.460390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.146 [2024-11-15 11:53:50.460419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.146 qpair failed and we were unable to recover it. 00:30:25.146 [2024-11-15 11:53:50.460781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.146 [2024-11-15 11:53:50.460811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.146 qpair failed and we were unable to recover it. 00:30:25.146 [2024-11-15 11:53:50.461191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.146 [2024-11-15 11:53:50.461220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.147 qpair failed and we were unable to recover it. 00:30:25.147 [2024-11-15 11:53:50.461629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.147 [2024-11-15 11:53:50.461659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.147 qpair failed and we were unable to recover it. 00:30:25.147 [2024-11-15 11:53:50.461882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.147 [2024-11-15 11:53:50.461912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.147 qpair failed and we were unable to recover it. 
00:30:25.147 [2024-11-15 11:53:50.462251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.147 [2024-11-15 11:53:50.462281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.147 qpair failed and we were unable to recover it. 00:30:25.147 [2024-11-15 11:53:50.462655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.147 [2024-11-15 11:53:50.462685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.147 qpair failed and we were unable to recover it. 00:30:25.147 [2024-11-15 11:53:50.463048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.147 [2024-11-15 11:53:50.463078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.147 qpair failed and we were unable to recover it. 00:30:25.147 [2024-11-15 11:53:50.463441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.147 [2024-11-15 11:53:50.463471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.147 qpair failed and we were unable to recover it. 00:30:25.147 [2024-11-15 11:53:50.463830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.147 [2024-11-15 11:53:50.463860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.147 qpair failed and we were unable to recover it. 00:30:25.147 [2024-11-15 11:53:50.464224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.147 [2024-11-15 11:53:50.464255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.147 qpair failed and we were unable to recover it. 00:30:25.147 [2024-11-15 11:53:50.464513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.147 [2024-11-15 11:53:50.464543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.147 qpair failed and we were unable to recover it. 00:30:25.147 [2024-11-15 11:53:50.464948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.147 [2024-11-15 11:53:50.464979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.147 qpair failed and we were unable to recover it. 00:30:25.147 [2024-11-15 11:53:50.465351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.147 [2024-11-15 11:53:50.465381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.147 qpair failed and we were unable to recover it. 00:30:25.147 [2024-11-15 11:53:50.465731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.147 [2024-11-15 11:53:50.465761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.147 qpair failed and we were unable to recover it. 
00:30:25.147 [2024-11-15 11:53:50.466167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.147 [2024-11-15 11:53:50.466195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.147 qpair failed and we were unable to recover it. 00:30:25.147 [2024-11-15 11:53:50.466325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.147 [2024-11-15 11:53:50.466355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.147 qpair failed and we were unable to recover it. 00:30:25.147 [2024-11-15 11:53:50.466747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.147 [2024-11-15 11:53:50.466778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.147 qpair failed and we were unable to recover it. 00:30:25.147 [2024-11-15 11:53:50.467189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.147 [2024-11-15 11:53:50.467219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.147 qpair failed and we were unable to recover it. 00:30:25.147 [2024-11-15 11:53:50.467451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.147 [2024-11-15 11:53:50.467480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.147 qpair failed and we were unable to recover it. 00:30:25.147 [2024-11-15 11:53:50.467708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.147 [2024-11-15 11:53:50.467738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.147 qpair failed and we were unable to recover it. 00:30:25.147 [2024-11-15 11:53:50.468136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.147 [2024-11-15 11:53:50.468167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.147 qpair failed and we were unable to recover it. 00:30:25.147 [2024-11-15 11:53:50.468530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.147 [2024-11-15 11:53:50.468560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.147 qpair failed and we were unable to recover it. 00:30:25.147 [2024-11-15 11:53:50.468828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.147 [2024-11-15 11:53:50.468857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.147 qpair failed and we were unable to recover it. 00:30:25.147 [2024-11-15 11:53:50.468880] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:25.147 [2024-11-15 11:53:50.468930] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:25.147 [2024-11-15 11:53:50.468938] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:25.147 [2024-11-15 11:53:50.468946] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:25.147 [2024-11-15 11:53:50.468952] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
[... the connect()/qpair failure pattern continues from 2024-11-15 11:53:50.469194 through 11:53:50.470928 ...]
00:30:25.147 [2024-11-15 11:53:50.471145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
[... one more connect()/qpair failure at 2024-11-15 11:53:50.471301 ...]
00:30:25.147 [2024-11-15 11:53:50.471300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:30:25.147 [2024-11-15 11:53:50.471467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:30:25.147 [2024-11-15 11:53:50.471468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
[... one more connect()/qpair failure at 2024-11-15 11:53:50.471693 ...]
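Editor's note: errno 111 on Linux is ECONNREFUSED, i.e. the host at 10.0.0.2 was reachable but nothing was accepting connections on port 4420 (the IANA-assigned NVMe/TCP port) at the time of these attempts. The sketch below is a minimal standalone reproduction of that condition, not SPDK's actual posix.c implementation; the address and port are taken from the log.

/*
 * Minimal sketch (assumption: plain blocking sockets, not SPDK's posix.c):
 * dial a TCP endpoint where no listener is running. If the host is
 * reachable but the port is closed, connect() fails with errno 111.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	/* Address and port mirror the log entries above. */
	struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
	inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

	int fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0) {
		perror("socket");
		return 1;
	}

	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
		/* With a reachable host but no listener, this prints:
		 * connect() failed, errno = 111 (Connection refused) */
		printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
	}

	close(fd);
	return 0;
}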
00:30:25.147 [2024-11-15 11:53:50.472125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.147 [2024-11-15 11:53:50.472154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:25.147 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair failure pattern repeats with advancing timestamps ...]
00:30:25.150 [2024-11-15 11:53:50.511835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.150 [2024-11-15 11:53:50.511864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:25.150 qpair failed and we were unable to recover it.
00:30:25.150 [2024-11-15 11:53:50.512128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.150 [2024-11-15 11:53:50.512157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.150 qpair failed and we were unable to recover it. 00:30:25.150 [2024-11-15 11:53:50.512500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.150 [2024-11-15 11:53:50.512529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.150 qpair failed and we were unable to recover it. 00:30:25.150 [2024-11-15 11:53:50.512925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.150 [2024-11-15 11:53:50.512956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.150 qpair failed and we were unable to recover it. 00:30:25.150 [2024-11-15 11:53:50.513180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.150 [2024-11-15 11:53:50.513208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.150 qpair failed and we were unable to recover it. 00:30:25.150 [2024-11-15 11:53:50.513607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.150 [2024-11-15 11:53:50.513639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.150 qpair failed and we were unable to recover it. 00:30:25.150 [2024-11-15 11:53:50.514018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.150 [2024-11-15 11:53:50.514049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.150 qpair failed and we were unable to recover it. 00:30:25.150 [2024-11-15 11:53:50.514396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.150 [2024-11-15 11:53:50.514425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.150 qpair failed and we were unable to recover it. 00:30:25.150 [2024-11-15 11:53:50.514670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.150 [2024-11-15 11:53:50.514701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.150 qpair failed and we were unable to recover it. 00:30:25.150 [2024-11-15 11:53:50.515073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.150 [2024-11-15 11:53:50.515102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.150 qpair failed and we were unable to recover it. 00:30:25.150 [2024-11-15 11:53:50.515523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.150 [2024-11-15 11:53:50.515552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.150 qpair failed and we were unable to recover it. 
00:30:25.150 [2024-11-15 11:53:50.515938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.150 [2024-11-15 11:53:50.515967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.150 qpair failed and we were unable to recover it. 00:30:25.150 [2024-11-15 11:53:50.516364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.151 [2024-11-15 11:53:50.516394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.151 qpair failed and we were unable to recover it. 00:30:25.151 [2024-11-15 11:53:50.516645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.151 [2024-11-15 11:53:50.516676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.151 qpair failed and we were unable to recover it. 00:30:25.151 [2024-11-15 11:53:50.517027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.151 [2024-11-15 11:53:50.517057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.151 qpair failed and we were unable to recover it. 00:30:25.151 [2024-11-15 11:53:50.517415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.151 [2024-11-15 11:53:50.517444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.151 qpair failed and we were unable to recover it. 00:30:25.151 [2024-11-15 11:53:50.517791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.151 [2024-11-15 11:53:50.517824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.151 qpair failed and we were unable to recover it. 00:30:25.151 [2024-11-15 11:53:50.518110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.151 [2024-11-15 11:53:50.518138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.151 qpair failed and we were unable to recover it. 00:30:25.151 [2024-11-15 11:53:50.518377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.151 [2024-11-15 11:53:50.518408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.151 qpair failed and we were unable to recover it. 00:30:25.151 [2024-11-15 11:53:50.518680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.151 [2024-11-15 11:53:50.518710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.151 qpair failed and we were unable to recover it. 00:30:25.151 [2024-11-15 11:53:50.519101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.151 [2024-11-15 11:53:50.519129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.151 qpair failed and we were unable to recover it. 
00:30:25.151 [2024-11-15 11:53:50.519542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.151 [2024-11-15 11:53:50.519604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.151 qpair failed and we were unable to recover it. 00:30:25.151 [2024-11-15 11:53:50.519711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.151 [2024-11-15 11:53:50.519738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.151 qpair failed and we were unable to recover it. 00:30:25.151 [2024-11-15 11:53:50.520272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.151 [2024-11-15 11:53:50.520388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.151 qpair failed and we were unable to recover it. 00:30:25.151 [2024-11-15 11:53:50.520856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.151 [2024-11-15 11:53:50.520957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.151 qpair failed and we were unable to recover it. 00:30:25.151 [2024-11-15 11:53:50.521370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.151 [2024-11-15 11:53:50.521407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.151 qpair failed and we were unable to recover it. 00:30:25.151 [2024-11-15 11:53:50.521783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.151 [2024-11-15 11:53:50.521814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.151 qpair failed and we were unable to recover it. 00:30:25.151 [2024-11-15 11:53:50.522153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.151 [2024-11-15 11:53:50.522183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.151 qpair failed and we were unable to recover it. 00:30:25.151 [2024-11-15 11:53:50.522413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.151 [2024-11-15 11:53:50.522442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.151 qpair failed and we were unable to recover it. 00:30:25.151 [2024-11-15 11:53:50.522654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.151 [2024-11-15 11:53:50.522684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.151 qpair failed and we were unable to recover it. 00:30:25.151 [2024-11-15 11:53:50.523070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.151 [2024-11-15 11:53:50.523100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.151 qpair failed and we were unable to recover it. 
00:30:25.151 [2024-11-15 11:53:50.523438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.151 [2024-11-15 11:53:50.523467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.151 qpair failed and we were unable to recover it. 00:30:25.151 [2024-11-15 11:53:50.523820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.151 [2024-11-15 11:53:50.523855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.151 qpair failed and we were unable to recover it. 00:30:25.151 [2024-11-15 11:53:50.524201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.151 [2024-11-15 11:53:50.524230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.151 qpair failed and we were unable to recover it. 00:30:25.151 [2024-11-15 11:53:50.524668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.151 [2024-11-15 11:53:50.524699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.151 qpair failed and we were unable to recover it. 00:30:25.151 [2024-11-15 11:53:50.524939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.151 [2024-11-15 11:53:50.524968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.151 qpair failed and we were unable to recover it. 00:30:25.151 [2024-11-15 11:53:50.525358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.151 [2024-11-15 11:53:50.525401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.151 qpair failed and we were unable to recover it. 00:30:25.151 [2024-11-15 11:53:50.525814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.151 [2024-11-15 11:53:50.525846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.151 qpair failed and we were unable to recover it. 00:30:25.151 [2024-11-15 11:53:50.526221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.151 [2024-11-15 11:53:50.526249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.151 qpair failed and we were unable to recover it. 00:30:25.151 [2024-11-15 11:53:50.526603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.151 [2024-11-15 11:53:50.526635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.151 qpair failed and we were unable to recover it. 00:30:25.151 [2024-11-15 11:53:50.526983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.151 [2024-11-15 11:53:50.527012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.151 qpair failed and we were unable to recover it. 
00:30:25.151 [2024-11-15 11:53:50.527382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.151 [2024-11-15 11:53:50.527412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.151 qpair failed and we were unable to recover it. 00:30:25.151 [2024-11-15 11:53:50.527659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.151 [2024-11-15 11:53:50.527689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.151 qpair failed and we were unable to recover it. 00:30:25.151 [2024-11-15 11:53:50.527919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.151 [2024-11-15 11:53:50.527949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.151 qpair failed and we were unable to recover it. 00:30:25.151 [2024-11-15 11:53:50.528289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.151 [2024-11-15 11:53:50.528317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.151 qpair failed and we were unable to recover it. 00:30:25.151 [2024-11-15 11:53:50.528709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.151 [2024-11-15 11:53:50.528739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.151 qpair failed and we were unable to recover it. 00:30:25.151 [2024-11-15 11:53:50.528938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.151 [2024-11-15 11:53:50.528968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.152 qpair failed and we were unable to recover it. 00:30:25.152 [2024-11-15 11:53:50.529213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.152 [2024-11-15 11:53:50.529242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.152 qpair failed and we were unable to recover it. 00:30:25.152 [2024-11-15 11:53:50.529582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.152 [2024-11-15 11:53:50.529614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.152 qpair failed and we were unable to recover it. 00:30:25.152 [2024-11-15 11:53:50.529978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.152 [2024-11-15 11:53:50.530006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.152 qpair failed and we were unable to recover it. 00:30:25.152 [2024-11-15 11:53:50.530352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.152 [2024-11-15 11:53:50.530380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.152 qpair failed and we were unable to recover it. 
00:30:25.152 [2024-11-15 11:53:50.530776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.152 [2024-11-15 11:53:50.530807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.152 qpair failed and we were unable to recover it. 00:30:25.152 [2024-11-15 11:53:50.531066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.152 [2024-11-15 11:53:50.531094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.152 qpair failed and we were unable to recover it. 00:30:25.152 [2024-11-15 11:53:50.531428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.152 [2024-11-15 11:53:50.531456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.152 qpair failed and we were unable to recover it. 00:30:25.152 [2024-11-15 11:53:50.531676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.152 [2024-11-15 11:53:50.531706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.152 qpair failed and we were unable to recover it. 00:30:25.152 [2024-11-15 11:53:50.531946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.152 [2024-11-15 11:53:50.531974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.152 qpair failed and we were unable to recover it. 00:30:25.152 [2024-11-15 11:53:50.532111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.152 [2024-11-15 11:53:50.532140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.152 qpair failed and we were unable to recover it. 00:30:25.152 [2024-11-15 11:53:50.532500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.152 [2024-11-15 11:53:50.532530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.152 qpair failed and we were unable to recover it. 00:30:25.152 [2024-11-15 11:53:50.532758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.152 [2024-11-15 11:53:50.532788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.152 qpair failed and we were unable to recover it. 00:30:25.152 [2024-11-15 11:53:50.533008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.152 [2024-11-15 11:53:50.533037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.152 qpair failed and we were unable to recover it. 00:30:25.152 [2024-11-15 11:53:50.533390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.152 [2024-11-15 11:53:50.533419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.152 qpair failed and we were unable to recover it. 
00:30:25.152 [2024-11-15 11:53:50.533776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.152 [2024-11-15 11:53:50.533807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.152 qpair failed and we were unable to recover it. 00:30:25.152 [2024-11-15 11:53:50.534262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.152 [2024-11-15 11:53:50.534292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.152 qpair failed and we were unable to recover it. 00:30:25.152 [2024-11-15 11:53:50.534661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.152 [2024-11-15 11:53:50.534692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.152 qpair failed and we were unable to recover it. 00:30:25.152 [2024-11-15 11:53:50.535017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.152 [2024-11-15 11:53:50.535046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.152 qpair failed and we were unable to recover it. 00:30:25.152 [2024-11-15 11:53:50.535398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.152 [2024-11-15 11:53:50.535426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.152 qpair failed and we were unable to recover it. 00:30:25.152 [2024-11-15 11:53:50.535650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.152 [2024-11-15 11:53:50.535679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.152 qpair failed and we were unable to recover it. 00:30:25.152 [2024-11-15 11:53:50.536041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.152 [2024-11-15 11:53:50.536069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.152 qpair failed and we were unable to recover it. 00:30:25.152 [2024-11-15 11:53:50.536410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.152 [2024-11-15 11:53:50.536440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.152 qpair failed and we were unable to recover it. 00:30:25.152 [2024-11-15 11:53:50.536787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.152 [2024-11-15 11:53:50.536818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.152 qpair failed and we were unable to recover it. 00:30:25.152 [2024-11-15 11:53:50.537185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.152 [2024-11-15 11:53:50.537214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.152 qpair failed and we were unable to recover it. 
00:30:25.152 [2024-11-15 11:53:50.537582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.152 [2024-11-15 11:53:50.537612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.152 qpair failed and we were unable to recover it. 00:30:25.152 [2024-11-15 11:53:50.537946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.152 [2024-11-15 11:53:50.537975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.152 qpair failed and we were unable to recover it. 00:30:25.152 [2024-11-15 11:53:50.538353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.152 [2024-11-15 11:53:50.538382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.152 qpair failed and we were unable to recover it. 00:30:25.152 [2024-11-15 11:53:50.538607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.152 [2024-11-15 11:53:50.538637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.152 qpair failed and we were unable to recover it. 00:30:25.152 [2024-11-15 11:53:50.538862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.152 [2024-11-15 11:53:50.538890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.152 qpair failed and we were unable to recover it. 00:30:25.152 [2024-11-15 11:53:50.539231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.152 [2024-11-15 11:53:50.539266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.152 qpair failed and we were unable to recover it. 00:30:25.152 [2024-11-15 11:53:50.539639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.152 [2024-11-15 11:53:50.539669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.152 qpair failed and we were unable to recover it. 00:30:25.152 [2024-11-15 11:53:50.540070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.152 [2024-11-15 11:53:50.540098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.152 qpair failed and we were unable to recover it. 00:30:25.152 [2024-11-15 11:53:50.540330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.152 [2024-11-15 11:53:50.540358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.152 qpair failed and we were unable to recover it. 00:30:25.152 [2024-11-15 11:53:50.540613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.152 [2024-11-15 11:53:50.540644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.152 qpair failed and we were unable to recover it. 
00:30:25.152 [2024-11-15 11:53:50.540995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.152 [2024-11-15 11:53:50.541024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.152 qpair failed and we were unable to recover it. 00:30:25.152 [2024-11-15 11:53:50.541371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.152 [2024-11-15 11:53:50.541399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.152 qpair failed and we were unable to recover it. 00:30:25.152 [2024-11-15 11:53:50.541632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.152 [2024-11-15 11:53:50.541662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.152 qpair failed and we were unable to recover it. 00:30:25.152 [2024-11-15 11:53:50.541911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.153 [2024-11-15 11:53:50.541940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.153 qpair failed and we were unable to recover it. 00:30:25.153 [2024-11-15 11:53:50.542207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.153 [2024-11-15 11:53:50.542236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.153 qpair failed and we were unable to recover it. 00:30:25.153 [2024-11-15 11:53:50.542597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.153 [2024-11-15 11:53:50.542627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.153 qpair failed and we were unable to recover it. 00:30:25.153 [2024-11-15 11:53:50.542951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.153 [2024-11-15 11:53:50.542979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.153 qpair failed and we were unable to recover it. 00:30:25.153 [2024-11-15 11:53:50.543350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.153 [2024-11-15 11:53:50.543379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.153 qpair failed and we were unable to recover it. 00:30:25.153 [2024-11-15 11:53:50.543732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.153 [2024-11-15 11:53:50.543761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.153 qpair failed and we were unable to recover it. 00:30:25.153 [2024-11-15 11:53:50.543882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.153 [2024-11-15 11:53:50.543911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.153 qpair failed and we were unable to recover it. 
00:30:25.153 [2024-11-15 11:53:50.544212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.153 [2024-11-15 11:53:50.544240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.153 qpair failed and we were unable to recover it. 00:30:25.153 [2024-11-15 11:53:50.544600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.153 [2024-11-15 11:53:50.544629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.153 qpair failed and we were unable to recover it. 00:30:25.153 [2024-11-15 11:53:50.544967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.153 [2024-11-15 11:53:50.544995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.153 qpair failed and we were unable to recover it. 00:30:25.153 [2024-11-15 11:53:50.545361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.153 [2024-11-15 11:53:50.545392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.153 qpair failed and we were unable to recover it. 00:30:25.153 [2024-11-15 11:53:50.545769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.153 [2024-11-15 11:53:50.545799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.153 qpair failed and we were unable to recover it. 00:30:25.153 [2024-11-15 11:53:50.546118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.153 [2024-11-15 11:53:50.546146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.153 qpair failed and we were unable to recover it. 00:30:25.153 [2024-11-15 11:53:50.546512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.153 [2024-11-15 11:53:50.546539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.153 qpair failed and we were unable to recover it. 00:30:25.153 [2024-11-15 11:53:50.546916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.153 [2024-11-15 11:53:50.546945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.153 qpair failed and we were unable to recover it. 00:30:25.153 [2024-11-15 11:53:50.547331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.153 [2024-11-15 11:53:50.547359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.153 qpair failed and we were unable to recover it. 00:30:25.153 [2024-11-15 11:53:50.547697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.153 [2024-11-15 11:53:50.547727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.153 qpair failed and we were unable to recover it. 
00:30:25.153 [2024-11-15 11:53:50.547969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.153 [2024-11-15 11:53:50.547997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.153 qpair failed and we were unable to recover it. 00:30:25.153 [2024-11-15 11:53:50.548310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.153 [2024-11-15 11:53:50.548339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.153 qpair failed and we were unable to recover it. 00:30:25.153 [2024-11-15 11:53:50.548554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.153 [2024-11-15 11:53:50.548598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.153 qpair failed and we were unable to recover it. 00:30:25.153 [2024-11-15 11:53:50.548947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.153 [2024-11-15 11:53:50.548975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.153 qpair failed and we were unable to recover it. 00:30:25.153 [2024-11-15 11:53:50.549352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.153 [2024-11-15 11:53:50.549380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.153 qpair failed and we were unable to recover it. 00:30:25.153 [2024-11-15 11:53:50.549648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.153 [2024-11-15 11:53:50.549677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.153 qpair failed and we were unable to recover it. 00:30:25.153 [2024-11-15 11:53:50.550069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.153 [2024-11-15 11:53:50.550097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.153 qpair failed and we were unable to recover it. 00:30:25.153 [2024-11-15 11:53:50.550463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.153 [2024-11-15 11:53:50.550491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.153 qpair failed and we were unable to recover it. 00:30:25.153 [2024-11-15 11:53:50.550752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.153 [2024-11-15 11:53:50.550781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.153 qpair failed and we were unable to recover it. 00:30:25.153 [2024-11-15 11:53:50.551154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.153 [2024-11-15 11:53:50.551182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.153 qpair failed and we were unable to recover it. 
00:30:25.153 [2024-11-15 11:53:50.551560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.153 [2024-11-15 11:53:50.551601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.153 qpair failed and we were unable to recover it. 00:30:25.153 [2024-11-15 11:53:50.551930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.153 [2024-11-15 11:53:50.551960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.153 qpair failed and we were unable to recover it. 00:30:25.153 [2024-11-15 11:53:50.552244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.153 [2024-11-15 11:53:50.552271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.153 qpair failed and we were unable to recover it. 00:30:25.153 [2024-11-15 11:53:50.552639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.153 [2024-11-15 11:53:50.552670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.153 qpair failed and we were unable to recover it. 00:30:25.153 [2024-11-15 11:53:50.553006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.153 [2024-11-15 11:53:50.553034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.153 qpair failed and we were unable to recover it. 00:30:25.153 [2024-11-15 11:53:50.553368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.153 [2024-11-15 11:53:50.553397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.153 qpair failed and we were unable to recover it. 00:30:25.153 [2024-11-15 11:53:50.553619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.153 [2024-11-15 11:53:50.553650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.153 qpair failed and we were unable to recover it. 00:30:25.153 [2024-11-15 11:53:50.554073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.153 [2024-11-15 11:53:50.554101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.153 qpair failed and we were unable to recover it. 00:30:25.153 [2024-11-15 11:53:50.554485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.153 [2024-11-15 11:53:50.554514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.153 qpair failed and we were unable to recover it. 00:30:25.153 [2024-11-15 11:53:50.554876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.153 [2024-11-15 11:53:50.554907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.153 qpair failed and we were unable to recover it. 
00:30:25.153 [2024-11-15 11:53:50.555153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.153 [2024-11-15 11:53:50.555182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.153 qpair failed and we were unable to recover it. 00:30:25.153 [2024-11-15 11:53:50.555529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.153 [2024-11-15 11:53:50.555557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.154 qpair failed and we were unable to recover it. 00:30:25.154 [2024-11-15 11:53:50.555941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.154 [2024-11-15 11:53:50.555970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.154 qpair failed and we were unable to recover it. 00:30:25.154 [2024-11-15 11:53:50.556327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.154 [2024-11-15 11:53:50.556355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.154 qpair failed and we were unable to recover it. 00:30:25.154 [2024-11-15 11:53:50.556714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.154 [2024-11-15 11:53:50.556745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.154 qpair failed and we were unable to recover it. 00:30:25.154 [2024-11-15 11:53:50.557114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.154 [2024-11-15 11:53:50.557143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.154 qpair failed and we were unable to recover it. 00:30:25.154 [2024-11-15 11:53:50.557508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.154 [2024-11-15 11:53:50.557537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.154 qpair failed and we were unable to recover it. 00:30:25.154 [2024-11-15 11:53:50.557914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.154 [2024-11-15 11:53:50.557944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.154 qpair failed and we were unable to recover it. 00:30:25.154 [2024-11-15 11:53:50.558150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.154 [2024-11-15 11:53:50.558178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.154 qpair failed and we were unable to recover it. 00:30:25.154 [2024-11-15 11:53:50.558549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.154 [2024-11-15 11:53:50.558586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.154 qpair failed and we were unable to recover it. 
00:30:25.154 [2024-11-15 11:53:50.558807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.154 [2024-11-15 11:53:50.558835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420
00:30:25.154 qpair failed and we were unable to recover it.
[... the same three-record sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats ~200 more times between 11:53:50.559 and 11:53:50.633 ...]
00:30:25.433 [2024-11-15 11:53:50.633653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.433 [2024-11-15 11:53:50.633682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420
00:30:25.433 qpair failed and we were unable to recover it.
00:30:25.433 [2024-11-15 11:53:50.633995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.433 [2024-11-15 11:53:50.634023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.433 qpair failed and we were unable to recover it. 00:30:25.433 [2024-11-15 11:53:50.634386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.433 [2024-11-15 11:53:50.634414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.433 qpair failed and we were unable to recover it. 00:30:25.433 [2024-11-15 11:53:50.634681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.433 [2024-11-15 11:53:50.634717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.433 qpair failed and we were unable to recover it. 00:30:25.433 [2024-11-15 11:53:50.635008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.433 [2024-11-15 11:53:50.635036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.433 qpair failed and we were unable to recover it. 00:30:25.433 [2024-11-15 11:53:50.635368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.433 [2024-11-15 11:53:50.635397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.433 qpair failed and we were unable to recover it. 00:30:25.433 [2024-11-15 11:53:50.635769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.433 [2024-11-15 11:53:50.635797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.433 qpair failed and we were unable to recover it. 00:30:25.433 [2024-11-15 11:53:50.636187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.433 [2024-11-15 11:53:50.636215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.433 qpair failed and we were unable to recover it. 00:30:25.433 [2024-11-15 11:53:50.636592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.433 [2024-11-15 11:53:50.636622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.433 qpair failed and we were unable to recover it. 00:30:25.433 [2024-11-15 11:53:50.636975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.433 [2024-11-15 11:53:50.637003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.433 qpair failed and we were unable to recover it. 00:30:25.433 [2024-11-15 11:53:50.637386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.433 [2024-11-15 11:53:50.637415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.433 qpair failed and we were unable to recover it. 
00:30:25.433 [2024-11-15 11:53:50.637680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.433 [2024-11-15 11:53:50.637713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.433 qpair failed and we were unable to recover it. 00:30:25.433 [2024-11-15 11:53:50.637916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.433 [2024-11-15 11:53:50.637945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.433 qpair failed and we were unable to recover it. 00:30:25.433 [2024-11-15 11:53:50.638333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.433 [2024-11-15 11:53:50.638361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.433 qpair failed and we were unable to recover it. 00:30:25.433 [2024-11-15 11:53:50.638722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.433 [2024-11-15 11:53:50.638751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.433 qpair failed and we were unable to recover it. 00:30:25.433 [2024-11-15 11:53:50.639163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.433 [2024-11-15 11:53:50.639190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.433 qpair failed and we were unable to recover it. 00:30:25.433 [2024-11-15 11:53:50.639570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.433 [2024-11-15 11:53:50.639599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.433 qpair failed and we were unable to recover it. 00:30:25.433 [2024-11-15 11:53:50.639955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.433 [2024-11-15 11:53:50.639983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.433 qpair failed and we were unable to recover it. 00:30:25.433 [2024-11-15 11:53:50.640364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.433 [2024-11-15 11:53:50.640392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.433 qpair failed and we were unable to recover it. 00:30:25.433 [2024-11-15 11:53:50.640640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.433 [2024-11-15 11:53:50.640672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.433 qpair failed and we were unable to recover it. 00:30:25.433 [2024-11-15 11:53:50.640916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.433 [2024-11-15 11:53:50.640944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.433 qpair failed and we were unable to recover it. 
00:30:25.433 [2024-11-15 11:53:50.641285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.433 [2024-11-15 11:53:50.641313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.433 qpair failed and we were unable to recover it. 00:30:25.433 [2024-11-15 11:53:50.641591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.433 [2024-11-15 11:53:50.641621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.433 qpair failed and we were unable to recover it. 00:30:25.433 [2024-11-15 11:53:50.641848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.434 [2024-11-15 11:53:50.641877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.434 qpair failed and we were unable to recover it. 00:30:25.434 [2024-11-15 11:53:50.642255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.434 [2024-11-15 11:53:50.642282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.434 qpair failed and we were unable to recover it. 00:30:25.434 [2024-11-15 11:53:50.642618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.434 [2024-11-15 11:53:50.642648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.434 qpair failed and we were unable to recover it. 00:30:25.434 [2024-11-15 11:53:50.642872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.434 [2024-11-15 11:53:50.642900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.434 qpair failed and we were unable to recover it. 00:30:25.434 [2024-11-15 11:53:50.643352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.434 [2024-11-15 11:53:50.643379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.434 qpair failed and we were unable to recover it. 00:30:25.434 [2024-11-15 11:53:50.643776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.434 [2024-11-15 11:53:50.643804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.434 qpair failed and we were unable to recover it. 00:30:25.434 [2024-11-15 11:53:50.643900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.434 [2024-11-15 11:53:50.643928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.434 qpair failed and we were unable to recover it. 00:30:25.434 [2024-11-15 11:53:50.644175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.434 [2024-11-15 11:53:50.644203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.434 qpair failed and we were unable to recover it. 
00:30:25.434 [2024-11-15 11:53:50.644445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.434 [2024-11-15 11:53:50.644475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.434 qpair failed and we were unable to recover it. 00:30:25.434 [2024-11-15 11:53:50.644839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.434 [2024-11-15 11:53:50.644869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.434 qpair failed and we were unable to recover it. 00:30:25.434 [2024-11-15 11:53:50.645135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.434 [2024-11-15 11:53:50.645167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.434 qpair failed and we were unable to recover it. 00:30:25.434 [2024-11-15 11:53:50.645526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.434 [2024-11-15 11:53:50.645554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.434 qpair failed and we were unable to recover it. 00:30:25.434 [2024-11-15 11:53:50.645804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.434 [2024-11-15 11:53:50.645833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.434 qpair failed and we were unable to recover it. 00:30:25.434 [2024-11-15 11:53:50.646215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.434 [2024-11-15 11:53:50.646243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.434 qpair failed and we were unable to recover it. 00:30:25.434 [2024-11-15 11:53:50.646500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.434 [2024-11-15 11:53:50.646530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.434 qpair failed and we were unable to recover it. 00:30:25.434 [2024-11-15 11:53:50.646777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.434 [2024-11-15 11:53:50.646808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.434 qpair failed and we were unable to recover it. 00:30:25.434 [2024-11-15 11:53:50.647128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.434 [2024-11-15 11:53:50.647157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.434 qpair failed and we were unable to recover it. 00:30:25.434 [2024-11-15 11:53:50.647514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.434 [2024-11-15 11:53:50.647542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.434 qpair failed and we were unable to recover it. 
00:30:25.434 [2024-11-15 11:53:50.647883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.434 [2024-11-15 11:53:50.647912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.434 qpair failed and we were unable to recover it. 00:30:25.434 [2024-11-15 11:53:50.648286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.434 [2024-11-15 11:53:50.648314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.434 qpair failed and we were unable to recover it. 00:30:25.434 [2024-11-15 11:53:50.648689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.434 [2024-11-15 11:53:50.648725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.434 qpair failed and we were unable to recover it. 00:30:25.434 [2024-11-15 11:53:50.648944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.434 [2024-11-15 11:53:50.648972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.434 qpair failed and we were unable to recover it. 00:30:25.434 [2024-11-15 11:53:50.649237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.434 [2024-11-15 11:53:50.649270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.434 qpair failed and we were unable to recover it. 00:30:25.434 [2024-11-15 11:53:50.649635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.434 [2024-11-15 11:53:50.649665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.434 qpair failed and we were unable to recover it. 00:30:25.434 [2024-11-15 11:53:50.650027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.434 [2024-11-15 11:53:50.650055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.434 qpair failed and we were unable to recover it. 00:30:25.434 [2024-11-15 11:53:50.650270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.434 [2024-11-15 11:53:50.650298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.434 qpair failed and we were unable to recover it. 00:30:25.434 [2024-11-15 11:53:50.650655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.434 [2024-11-15 11:53:50.650684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.434 qpair failed and we were unable to recover it. 00:30:25.434 [2024-11-15 11:53:50.651066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.434 [2024-11-15 11:53:50.651095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.434 qpair failed and we were unable to recover it. 
00:30:25.434 [2024-11-15 11:53:50.651468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.434 [2024-11-15 11:53:50.651497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.434 qpair failed and we were unable to recover it. 00:30:25.434 [2024-11-15 11:53:50.651744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.434 [2024-11-15 11:53:50.651774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.434 qpair failed and we were unable to recover it. 00:30:25.434 [2024-11-15 11:53:50.652144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.434 [2024-11-15 11:53:50.652172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.434 qpair failed and we were unable to recover it. 00:30:25.434 [2024-11-15 11:53:50.652550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.434 [2024-11-15 11:53:50.652588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.434 qpair failed and we were unable to recover it. 00:30:25.434 [2024-11-15 11:53:50.652933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.434 [2024-11-15 11:53:50.652962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.434 qpair failed and we were unable to recover it. 00:30:25.434 [2024-11-15 11:53:50.653304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.434 [2024-11-15 11:53:50.653333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.434 qpair failed and we were unable to recover it. 00:30:25.434 [2024-11-15 11:53:50.653547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.434 [2024-11-15 11:53:50.653586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.434 qpair failed and we were unable to recover it. 00:30:25.434 [2024-11-15 11:53:50.653846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.434 [2024-11-15 11:53:50.653874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.434 qpair failed and we were unable to recover it. 00:30:25.434 [2024-11-15 11:53:50.654118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.434 [2024-11-15 11:53:50.654146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.434 qpair failed and we were unable to recover it. 00:30:25.434 [2024-11-15 11:53:50.654428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.434 [2024-11-15 11:53:50.654456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.434 qpair failed and we were unable to recover it. 
00:30:25.434 [2024-11-15 11:53:50.654792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.435 [2024-11-15 11:53:50.654823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.435 qpair failed and we were unable to recover it. 00:30:25.435 [2024-11-15 11:53:50.655079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.435 [2024-11-15 11:53:50.655108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.435 qpair failed and we were unable to recover it. 00:30:25.435 [2024-11-15 11:53:50.655471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.435 [2024-11-15 11:53:50.655499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.435 qpair failed and we were unable to recover it. 00:30:25.435 [2024-11-15 11:53:50.655598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.435 [2024-11-15 11:53:50.655626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa100000b90 with addr=10.0.0.2, port=4420 00:30:25.435 qpair failed and we were unable to recover it. 00:30:25.435 [2024-11-15 11:53:50.656227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.435 [2024-11-15 11:53:50.656342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.435 qpair failed and we were unable to recover it. 00:30:25.435 [2024-11-15 11:53:50.656709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.435 [2024-11-15 11:53:50.656750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.435 qpair failed and we were unable to recover it. 00:30:25.435 [2024-11-15 11:53:50.657040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.435 [2024-11-15 11:53:50.657070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.435 qpair failed and we were unable to recover it. 00:30:25.435 [2024-11-15 11:53:50.657342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.435 [2024-11-15 11:53:50.657371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.435 qpair failed and we were unable to recover it. 00:30:25.435 [2024-11-15 11:53:50.657647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.435 [2024-11-15 11:53:50.657684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.435 qpair failed and we were unable to recover it. 00:30:25.435 [2024-11-15 11:53:50.658067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.435 [2024-11-15 11:53:50.658100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.435 qpair failed and we were unable to recover it. 
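(For context: on Linux, errno 111 is ECONNREFUSED -- the target at 10.0.0.2 was reachable, but nothing was accepting on NVMe/TCP port 4420, so every qpair connect attempt was refused at the TCP level. The sketch below is plain POSIX sockets, not SPDK code; the address and port are taken from the log. Run against a host that is up but has no listener on that port, it reproduces the same errno.)

    /* sketch: reproduce "connect() failed, errno = 111" the way
     * posix_sock_create reports it above -- a TCP connect to a
     * reachable host with no listener on the target port. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in sa = {0};
        sa.sin_family = AF_INET;
        sa.sin_port = htons(4420);                /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
            /* with the host up and the port closed, this prints errno = 111 */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }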
00:30:25.435 [2024-11-15 11:53:50.656227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.435 [2024-11-15 11:53:50.656342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:25.435 qpair failed and we were unable to recover it.
[... from 11:53:50.656227 the errors carry a new tqpair handle, 0x80b0c0, but are otherwise identical; the sequence repeats for 106 attempts through 11:53:50.694828, ending with: ...]
00:30:25.438 [2024-11-15 11:53:50.694797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.438 [2024-11-15 11:53:50.694828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:25.438 qpair failed and we were unable to recover it.
00:30:25.438 [2024-11-15 11:53:50.695220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.438 [2024-11-15 11:53:50.695248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.438 qpair failed and we were unable to recover it. 00:30:25.438 [2024-11-15 11:53:50.695484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.438 [2024-11-15 11:53:50.695513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.438 qpair failed and we were unable to recover it. 00:30:25.438 [2024-11-15 11:53:50.695865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.438 [2024-11-15 11:53:50.695894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.438 qpair failed and we were unable to recover it. 00:30:25.438 [2024-11-15 11:53:50.696246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.438 [2024-11-15 11:53:50.696277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.438 qpair failed and we were unable to recover it. 00:30:25.438 [2024-11-15 11:53:50.696542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.438 [2024-11-15 11:53:50.696582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.438 qpair failed and we were unable to recover it. 00:30:25.438 [2024-11-15 11:53:50.696942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.438 [2024-11-15 11:53:50.696971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.438 qpair failed and we were unable to recover it. 00:30:25.438 [2024-11-15 11:53:50.697341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.438 [2024-11-15 11:53:50.697370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.438 qpair failed and we were unable to recover it. 00:30:25.438 [2024-11-15 11:53:50.697645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.438 [2024-11-15 11:53:50.697674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.438 qpair failed and we were unable to recover it. 00:30:25.438 [2024-11-15 11:53:50.698077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.438 [2024-11-15 11:53:50.698106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.438 qpair failed and we were unable to recover it. 00:30:25.438 [2024-11-15 11:53:50.698350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.438 [2024-11-15 11:53:50.698378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.438 qpair failed and we were unable to recover it. 
00:30:25.438 [2024-11-15 11:53:50.698732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.438 [2024-11-15 11:53:50.698763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.438 qpair failed and we were unable to recover it. 00:30:25.438 [2024-11-15 11:53:50.699116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.438 [2024-11-15 11:53:50.699145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.438 qpair failed and we were unable to recover it. 00:30:25.438 [2024-11-15 11:53:50.699534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.438 [2024-11-15 11:53:50.699570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.438 qpair failed and we were unable to recover it. 00:30:25.438 [2024-11-15 11:53:50.699947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.438 [2024-11-15 11:53:50.699977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.438 qpair failed and we were unable to recover it. 00:30:25.438 [2024-11-15 11:53:50.700187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.438 [2024-11-15 11:53:50.700215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.438 qpair failed and we were unable to recover it. 00:30:25.438 [2024-11-15 11:53:50.700575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.438 [2024-11-15 11:53:50.700604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.438 qpair failed and we were unable to recover it. 00:30:25.438 [2024-11-15 11:53:50.700834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.438 [2024-11-15 11:53:50.700862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.438 qpair failed and we were unable to recover it. 00:30:25.438 [2024-11-15 11:53:50.701235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.438 [2024-11-15 11:53:50.701264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.438 qpair failed and we were unable to recover it. 00:30:25.438 [2024-11-15 11:53:50.701688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.438 [2024-11-15 11:53:50.701716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.438 qpair failed and we were unable to recover it. 00:30:25.438 [2024-11-15 11:53:50.702080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.438 [2024-11-15 11:53:50.702108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.438 qpair failed and we were unable to recover it. 
00:30:25.438 [2024-11-15 11:53:50.702443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.438 [2024-11-15 11:53:50.702472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.438 qpair failed and we were unable to recover it. 00:30:25.438 [2024-11-15 11:53:50.702709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.438 [2024-11-15 11:53:50.702738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.438 qpair failed and we were unable to recover it. 00:30:25.438 [2024-11-15 11:53:50.703120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.438 [2024-11-15 11:53:50.703150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.438 qpair failed and we were unable to recover it. 00:30:25.438 [2024-11-15 11:53:50.703497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.438 [2024-11-15 11:53:50.703526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.438 qpair failed and we were unable to recover it. 00:30:25.438 [2024-11-15 11:53:50.703752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.438 [2024-11-15 11:53:50.703782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.438 qpair failed and we were unable to recover it. 00:30:25.438 [2024-11-15 11:53:50.704134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.438 [2024-11-15 11:53:50.704162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.438 qpair failed and we were unable to recover it. 00:30:25.438 [2024-11-15 11:53:50.704534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.438 [2024-11-15 11:53:50.704569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.438 qpair failed and we were unable to recover it. 00:30:25.438 [2024-11-15 11:53:50.704744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.438 [2024-11-15 11:53:50.704771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.438 qpair failed and we were unable to recover it. 00:30:25.438 [2024-11-15 11:53:50.705205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.438 [2024-11-15 11:53:50.705233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.438 qpair failed and we were unable to recover it. 00:30:25.438 [2024-11-15 11:53:50.705625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.438 [2024-11-15 11:53:50.705656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.438 qpair failed and we were unable to recover it. 
00:30:25.438 [2024-11-15 11:53:50.706074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.438 [2024-11-15 11:53:50.706103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.438 qpair failed and we were unable to recover it. 00:30:25.438 [2024-11-15 11:53:50.706463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.438 [2024-11-15 11:53:50.706491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.438 qpair failed and we were unable to recover it. 00:30:25.438 [2024-11-15 11:53:50.706869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.438 [2024-11-15 11:53:50.706899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.438 qpair failed and we were unable to recover it. 00:30:25.438 [2024-11-15 11:53:50.707287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.438 [2024-11-15 11:53:50.707316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.438 qpair failed and we were unable to recover it. 00:30:25.438 [2024-11-15 11:53:50.707683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.438 [2024-11-15 11:53:50.707713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.438 qpair failed and we were unable to recover it. 00:30:25.438 [2024-11-15 11:53:50.707952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.438 [2024-11-15 11:53:50.707994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.438 qpair failed and we were unable to recover it. 00:30:25.438 [2024-11-15 11:53:50.708337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.438 [2024-11-15 11:53:50.708365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.438 qpair failed and we were unable to recover it. 00:30:25.438 [2024-11-15 11:53:50.708585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.439 [2024-11-15 11:53:50.708615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.439 qpair failed and we were unable to recover it. 00:30:25.439 [2024-11-15 11:53:50.708978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.439 [2024-11-15 11:53:50.709006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.439 qpair failed and we were unable to recover it. 00:30:25.439 [2024-11-15 11:53:50.709374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.439 [2024-11-15 11:53:50.709404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.439 qpair failed and we were unable to recover it. 
00:30:25.439 [2024-11-15 11:53:50.709774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.439 [2024-11-15 11:53:50.709803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.439 qpair failed and we were unable to recover it. 00:30:25.439 [2024-11-15 11:53:50.710169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.439 [2024-11-15 11:53:50.710197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.439 qpair failed and we were unable to recover it. 00:30:25.439 [2024-11-15 11:53:50.710409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.439 [2024-11-15 11:53:50.710436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.439 qpair failed and we were unable to recover it. 00:30:25.439 [2024-11-15 11:53:50.710796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.439 [2024-11-15 11:53:50.710826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.439 qpair failed and we were unable to recover it. 00:30:25.439 [2024-11-15 11:53:50.711211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.439 [2024-11-15 11:53:50.711238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.439 qpair failed and we were unable to recover it. 00:30:25.439 [2024-11-15 11:53:50.711620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.439 [2024-11-15 11:53:50.711649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.439 qpair failed and we were unable to recover it. 00:30:25.439 [2024-11-15 11:53:50.711947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.439 [2024-11-15 11:53:50.711977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.439 qpair failed and we were unable to recover it. 00:30:25.439 [2024-11-15 11:53:50.712337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.439 [2024-11-15 11:53:50.712364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.439 qpair failed and we were unable to recover it. 00:30:25.439 [2024-11-15 11:53:50.712629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.439 [2024-11-15 11:53:50.712662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.439 qpair failed and we were unable to recover it. 00:30:25.439 [2024-11-15 11:53:50.713025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.439 [2024-11-15 11:53:50.713054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.439 qpair failed and we were unable to recover it. 
00:30:25.439 [2024-11-15 11:53:50.713452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.439 [2024-11-15 11:53:50.713480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.439 qpair failed and we were unable to recover it. 00:30:25.439 [2024-11-15 11:53:50.713866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.439 [2024-11-15 11:53:50.713895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.439 qpair failed and we were unable to recover it. 00:30:25.439 [2024-11-15 11:53:50.714223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.439 [2024-11-15 11:53:50.714254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.439 qpair failed and we were unable to recover it. 00:30:25.439 [2024-11-15 11:53:50.714480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.439 [2024-11-15 11:53:50.714510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.439 qpair failed and we were unable to recover it. 00:30:25.439 [2024-11-15 11:53:50.714882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.439 [2024-11-15 11:53:50.714912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.439 qpair failed and we were unable to recover it. 00:30:25.439 [2024-11-15 11:53:50.715276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.439 [2024-11-15 11:53:50.715304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.439 qpair failed and we were unable to recover it. 00:30:25.439 [2024-11-15 11:53:50.715706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.439 [2024-11-15 11:53:50.715736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.439 qpair failed and we were unable to recover it. 00:30:25.439 [2024-11-15 11:53:50.715949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.439 [2024-11-15 11:53:50.715977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.439 qpair failed and we were unable to recover it. 00:30:25.439 [2024-11-15 11:53:50.716122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.439 [2024-11-15 11:53:50.716148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.439 qpair failed and we were unable to recover it. 00:30:25.439 [2024-11-15 11:53:50.716573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.439 [2024-11-15 11:53:50.716604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.439 qpair failed and we were unable to recover it. 
00:30:25.439 [2024-11-15 11:53:50.716978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.439 [2024-11-15 11:53:50.717007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.439 qpair failed and we were unable to recover it. 00:30:25.439 [2024-11-15 11:53:50.717334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.439 [2024-11-15 11:53:50.717362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.439 qpair failed and we were unable to recover it. 00:30:25.439 [2024-11-15 11:53:50.717601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.439 [2024-11-15 11:53:50.717636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.439 qpair failed and we were unable to recover it. 00:30:25.439 [2024-11-15 11:53:50.718000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.439 [2024-11-15 11:53:50.718029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.439 qpair failed and we were unable to recover it. 00:30:25.439 [2024-11-15 11:53:50.718404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.439 [2024-11-15 11:53:50.718434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.439 qpair failed and we were unable to recover it. 00:30:25.439 [2024-11-15 11:53:50.718821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.439 [2024-11-15 11:53:50.718851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.439 qpair failed and we were unable to recover it. 00:30:25.439 [2024-11-15 11:53:50.719213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.439 [2024-11-15 11:53:50.719241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.439 qpair failed and we were unable to recover it. 00:30:25.439 [2024-11-15 11:53:50.719609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.439 [2024-11-15 11:53:50.719639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.439 qpair failed and we were unable to recover it. 00:30:25.439 [2024-11-15 11:53:50.719992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.439 [2024-11-15 11:53:50.720020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.439 qpair failed and we were unable to recover it. 00:30:25.439 [2024-11-15 11:53:50.720411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.439 [2024-11-15 11:53:50.720439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.439 qpair failed and we were unable to recover it. 
00:30:25.439 [2024-11-15 11:53:50.720818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.439 [2024-11-15 11:53:50.720847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.439 qpair failed and we were unable to recover it. 00:30:25.439 [2024-11-15 11:53:50.721221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.439 [2024-11-15 11:53:50.721250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.439 qpair failed and we were unable to recover it. 00:30:25.439 [2024-11-15 11:53:50.721582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.439 [2024-11-15 11:53:50.721612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.439 qpair failed and we were unable to recover it. 00:30:25.439 [2024-11-15 11:53:50.721879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.439 [2024-11-15 11:53:50.721907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.439 qpair failed and we were unable to recover it. 00:30:25.439 [2024-11-15 11:53:50.722274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.439 [2024-11-15 11:53:50.722301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.439 qpair failed and we were unable to recover it. 00:30:25.439 [2024-11-15 11:53:50.722592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.439 [2024-11-15 11:53:50.722624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.440 qpair failed and we were unable to recover it. 00:30:25.440 [2024-11-15 11:53:50.722899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.440 [2024-11-15 11:53:50.722930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.440 qpair failed and we were unable to recover it. 00:30:25.440 [2024-11-15 11:53:50.723320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.440 [2024-11-15 11:53:50.723349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.440 qpair failed and we were unable to recover it. 00:30:25.440 [2024-11-15 11:53:50.723704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.440 [2024-11-15 11:53:50.723735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.440 qpair failed and we were unable to recover it. 00:30:25.440 [2024-11-15 11:53:50.724107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.440 [2024-11-15 11:53:50.724135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.440 qpair failed and we were unable to recover it. 
00:30:25.440 [2024-11-15 11:53:50.724517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.440 [2024-11-15 11:53:50.724545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.440 qpair failed and we were unable to recover it. 00:30:25.440 [2024-11-15 11:53:50.724788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.440 [2024-11-15 11:53:50.724817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.440 qpair failed and we were unable to recover it. 00:30:25.440 [2024-11-15 11:53:50.725183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.440 [2024-11-15 11:53:50.725212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.440 qpair failed and we were unable to recover it. 00:30:25.440 [2024-11-15 11:53:50.725454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.440 [2024-11-15 11:53:50.725484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.440 qpair failed and we were unable to recover it. 00:30:25.440 [2024-11-15 11:53:50.725888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.440 [2024-11-15 11:53:50.725920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.440 qpair failed and we were unable to recover it. 00:30:25.440 [2024-11-15 11:53:50.726298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.440 [2024-11-15 11:53:50.726326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.440 qpair failed and we were unable to recover it. 00:30:25.440 [2024-11-15 11:53:50.726699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.440 [2024-11-15 11:53:50.726728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.440 qpair failed and we were unable to recover it. 00:30:25.440 [2024-11-15 11:53:50.727080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.440 [2024-11-15 11:53:50.727108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.440 qpair failed and we were unable to recover it. 00:30:25.440 [2024-11-15 11:53:50.727450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.440 [2024-11-15 11:53:50.727479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.440 qpair failed and we were unable to recover it. 00:30:25.440 [2024-11-15 11:53:50.727818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.440 [2024-11-15 11:53:50.727848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.440 qpair failed and we were unable to recover it. 
00:30:25.440 [2024-11-15 11:53:50.728213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.440 [2024-11-15 11:53:50.728242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.440 qpair failed and we were unable to recover it. 00:30:25.440 [2024-11-15 11:53:50.728464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.440 [2024-11-15 11:53:50.728492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.440 qpair failed and we were unable to recover it. 00:30:25.440 [2024-11-15 11:53:50.728753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.440 [2024-11-15 11:53:50.728783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.440 qpair failed and we were unable to recover it. 00:30:25.440 [2024-11-15 11:53:50.729034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.440 [2024-11-15 11:53:50.729063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.440 qpair failed and we were unable to recover it. 00:30:25.440 [2024-11-15 11:53:50.729450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.440 [2024-11-15 11:53:50.729477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.440 qpair failed and we were unable to recover it. 00:30:25.440 [2024-11-15 11:53:50.729866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.440 [2024-11-15 11:53:50.729896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.440 qpair failed and we were unable to recover it. 00:30:25.440 [2024-11-15 11:53:50.730288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.440 [2024-11-15 11:53:50.730317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.440 qpair failed and we were unable to recover it. 00:30:25.440 [2024-11-15 11:53:50.730595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.440 [2024-11-15 11:53:50.730624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.440 qpair failed and we were unable to recover it. 00:30:25.440 [2024-11-15 11:53:50.730886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.440 [2024-11-15 11:53:50.730915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.440 qpair failed and we were unable to recover it. 00:30:25.440 [2024-11-15 11:53:50.731274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.440 [2024-11-15 11:53:50.731303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.440 qpair failed and we were unable to recover it. 
00:30:25.440 [2024-11-15 11:53:50.731554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.440 [2024-11-15 11:53:50.731595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.440 qpair failed and we were unable to recover it. 00:30:25.440 [2024-11-15 11:53:50.732002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.440 [2024-11-15 11:53:50.732033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.440 qpair failed and we were unable to recover it. 00:30:25.440 [2024-11-15 11:53:50.732325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.440 [2024-11-15 11:53:50.732354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.440 qpair failed and we were unable to recover it. 00:30:25.440 [2024-11-15 11:53:50.732584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.440 [2024-11-15 11:53:50.732620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.440 qpair failed and we were unable to recover it. 00:30:25.440 [2024-11-15 11:53:50.732996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.440 [2024-11-15 11:53:50.733025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.440 qpair failed and we were unable to recover it. 00:30:25.440 [2024-11-15 11:53:50.733343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.440 [2024-11-15 11:53:50.733371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.440 qpair failed and we were unable to recover it. 00:30:25.440 [2024-11-15 11:53:50.733486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.440 [2024-11-15 11:53:50.733512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.440 qpair failed and we were unable to recover it. 00:30:25.440 [2024-11-15 11:53:50.733896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.440 [2024-11-15 11:53:50.733925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.440 qpair failed and we were unable to recover it. 00:30:25.440 [2024-11-15 11:53:50.734307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.440 [2024-11-15 11:53:50.734335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.440 qpair failed and we were unable to recover it. 00:30:25.440 [2024-11-15 11:53:50.734726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.440 [2024-11-15 11:53:50.734755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.440 qpair failed and we were unable to recover it. 
00:30:25.440 [2024-11-15 11:53:50.734850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.440 [2024-11-15 11:53:50.734877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:25.440 qpair failed and we were unable to recover it.
00:30:25.440 [2024-11-15 11:53:50.735126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x800e00 is same with the state(6) to be set
00:30:25.440 [2024-11-15 11:53:50.735899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.440 [2024-11-15 11:53:50.736027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420
00:30:25.440 qpair failed and we were unable to recover it.
00:30:25.440 [2024-11-15 11:53:50.736491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.440 [2024-11-15 11:53:50.736537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420
00:30:25.440 qpair failed and we were unable to recover it.
00:30:25.440 [2024-11-15 11:53:50.736793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.440 [2024-11-15 11:53:50.736823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:25.441 qpair failed and we were unable to recover it.
00:30:25.441 [2024-11-15 11:53:50.737199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.441 [2024-11-15 11:53:50.737227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:25.441 qpair failed and we were unable to recover it.
00:30:25.441 [2024-11-15 11:53:50.737579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.441 [2024-11-15 11:53:50.737609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:25.441 qpair failed and we were unable to recover it.
00:30:25.441 [2024-11-15 11:53:50.738004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.441 [2024-11-15 11:53:50.738039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:25.441 qpair failed and we were unable to recover it.
00:30:25.441 [2024-11-15 11:53:50.738294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.441 [2024-11-15 11:53:50.738327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:25.441 qpair failed and we were unable to recover it.
00:30:25.441 [2024-11-15 11:53:50.738730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.441 [2024-11-15 11:53:50.738759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:25.441 qpair failed and we were unable to recover it.
00:30:25.441 [2024-11-15 11:53:50.739139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.441 [2024-11-15 11:53:50.739167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:25.441 qpair failed and we were unable to recover it.
[... entries from 11:53:50.739558 through 11:53:50.755420 elided: the connect() / qpair-failure pair above repeats ~48 more times, each attempt to addr=10.0.0.2, port=4420 on tqpair=0x80b0c0 failing with errno = 111 (ECONNREFUSED) ...]
00:30:25.442 [2024-11-15 11:53:50.755674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.442 [2024-11-15 11:53:50.755704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420
00:30:25.442 qpair failed and we were unable to recover it.
00:30:25.442 [2024-11-15 11:53:50.755957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.442 [2024-11-15 11:53:50.755985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.442 qpair failed and we were unable to recover it. 00:30:25.442 [2024-11-15 11:53:50.756384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.442 [2024-11-15 11:53:50.756412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.442 qpair failed and we were unable to recover it. 00:30:25.442 [2024-11-15 11:53:50.756760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.442 [2024-11-15 11:53:50.756790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.442 qpair failed and we were unable to recover it. 00:30:25.442 [2024-11-15 11:53:50.757171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.442 [2024-11-15 11:53:50.757198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.442 qpair failed and we were unable to recover it. 00:30:25.442 [2024-11-15 11:53:50.757576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.442 [2024-11-15 11:53:50.757605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.442 qpair failed and we were unable to recover it. 00:30:25.442 [2024-11-15 11:53:50.758000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.442 [2024-11-15 11:53:50.758028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.442 qpair failed and we were unable to recover it. 00:30:25.442 [2024-11-15 11:53:50.758377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.442 [2024-11-15 11:53:50.758405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.442 qpair failed and we were unable to recover it. 00:30:25.442 [2024-11-15 11:53:50.758641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.442 [2024-11-15 11:53:50.758670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.442 qpair failed and we were unable to recover it. 00:30:25.442 [2024-11-15 11:53:50.759051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.442 [2024-11-15 11:53:50.759078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.442 qpair failed and we were unable to recover it. 00:30:25.442 [2024-11-15 11:53:50.759440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.442 [2024-11-15 11:53:50.759467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.442 qpair failed and we were unable to recover it. 
00:30:25.442 [2024-11-15 11:53:50.759824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.442 [2024-11-15 11:53:50.759853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.442 qpair failed and we were unable to recover it. 00:30:25.442 [2024-11-15 11:53:50.760205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.442 [2024-11-15 11:53:50.760233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.442 qpair failed and we were unable to recover it. 00:30:25.442 [2024-11-15 11:53:50.760635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.442 [2024-11-15 11:53:50.760665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.442 qpair failed and we were unable to recover it. 00:30:25.442 [2024-11-15 11:53:50.761036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.442 [2024-11-15 11:53:50.761064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.442 qpair failed and we were unable to recover it. 00:30:25.442 [2024-11-15 11:53:50.761435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.442 [2024-11-15 11:53:50.761462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.442 qpair failed and we were unable to recover it. 00:30:25.442 [2024-11-15 11:53:50.761728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.442 [2024-11-15 11:53:50.761764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.442 qpair failed and we were unable to recover it. 00:30:25.442 [2024-11-15 11:53:50.762131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.443 [2024-11-15 11:53:50.762159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.443 qpair failed and we were unable to recover it. 00:30:25.443 [2024-11-15 11:53:50.762528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.443 [2024-11-15 11:53:50.762557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.443 qpair failed and we were unable to recover it. 00:30:25.443 [2024-11-15 11:53:50.762680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.443 [2024-11-15 11:53:50.762708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.443 qpair failed and we were unable to recover it. 00:30:25.443 [2024-11-15 11:53:50.763060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.443 [2024-11-15 11:53:50.763088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.443 qpair failed and we were unable to recover it. 
00:30:25.443 [2024-11-15 11:53:50.763319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.443 [2024-11-15 11:53:50.763347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.443 qpair failed and we were unable to recover it. 00:30:25.443 [2024-11-15 11:53:50.763672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.443 [2024-11-15 11:53:50.763701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.443 qpair failed and we were unable to recover it. 00:30:25.443 [2024-11-15 11:53:50.763931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.443 [2024-11-15 11:53:50.763959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.443 qpair failed and we were unable to recover it. 00:30:25.443 [2024-11-15 11:53:50.764401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.443 [2024-11-15 11:53:50.764428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.443 qpair failed and we were unable to recover it. 00:30:25.443 [2024-11-15 11:53:50.764763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.443 [2024-11-15 11:53:50.764791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.443 qpair failed and we were unable to recover it. 00:30:25.443 [2024-11-15 11:53:50.765163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.443 [2024-11-15 11:53:50.765191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.443 qpair failed and we were unable to recover it. 00:30:25.443 [2024-11-15 11:53:50.765588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.443 [2024-11-15 11:53:50.765618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.443 qpair failed and we were unable to recover it. 00:30:25.443 [2024-11-15 11:53:50.765941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.443 [2024-11-15 11:53:50.765968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.443 qpair failed and we were unable to recover it. 00:30:25.443 [2024-11-15 11:53:50.766219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.443 [2024-11-15 11:53:50.766248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.443 qpair failed and we were unable to recover it. 00:30:25.443 [2024-11-15 11:53:50.766510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.443 [2024-11-15 11:53:50.766539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.443 qpair failed and we were unable to recover it. 
00:30:25.443 [2024-11-15 11:53:50.766929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.443 [2024-11-15 11:53:50.766959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.443 qpair failed and we were unable to recover it. 00:30:25.443 [2024-11-15 11:53:50.767220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.443 [2024-11-15 11:53:50.767248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.443 qpair failed and we were unable to recover it. 00:30:25.443 [2024-11-15 11:53:50.767617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.443 [2024-11-15 11:53:50.767647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.443 qpair failed and we were unable to recover it. 00:30:25.443 [2024-11-15 11:53:50.768018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.443 [2024-11-15 11:53:50.768045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.443 qpair failed and we were unable to recover it. 00:30:25.443 [2024-11-15 11:53:50.768383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.443 [2024-11-15 11:53:50.768410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.443 qpair failed and we were unable to recover it. 00:30:25.443 [2024-11-15 11:53:50.768771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.443 [2024-11-15 11:53:50.768799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.443 qpair failed and we were unable to recover it. 00:30:25.443 [2024-11-15 11:53:50.769203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.443 [2024-11-15 11:53:50.769230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.443 qpair failed and we were unable to recover it. 00:30:25.443 [2024-11-15 11:53:50.769603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.443 [2024-11-15 11:53:50.769632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.443 qpair failed and we were unable to recover it. 00:30:25.443 [2024-11-15 11:53:50.769966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.443 [2024-11-15 11:53:50.769994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.443 qpair failed and we were unable to recover it. 00:30:25.443 [2024-11-15 11:53:50.770394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.443 [2024-11-15 11:53:50.770422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.443 qpair failed and we were unable to recover it. 
00:30:25.443 [2024-11-15 11:53:50.770682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.443 [2024-11-15 11:53:50.770710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.443 qpair failed and we were unable to recover it. 00:30:25.443 [2024-11-15 11:53:50.771084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.443 [2024-11-15 11:53:50.771111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.443 qpair failed and we were unable to recover it. 00:30:25.443 [2024-11-15 11:53:50.771491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.443 [2024-11-15 11:53:50.771518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.443 qpair failed and we were unable to recover it. 00:30:25.443 [2024-11-15 11:53:50.771914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.443 [2024-11-15 11:53:50.771945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.443 qpair failed and we were unable to recover it. 00:30:25.443 [2024-11-15 11:53:50.772360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.443 [2024-11-15 11:53:50.772388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.443 qpair failed and we were unable to recover it. 00:30:25.443 [2024-11-15 11:53:50.772725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.443 [2024-11-15 11:53:50.772754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.443 qpair failed and we were unable to recover it. 00:30:25.443 [2024-11-15 11:53:50.773114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.443 [2024-11-15 11:53:50.773143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.443 qpair failed and we were unable to recover it. 00:30:25.443 [2024-11-15 11:53:50.773503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.443 [2024-11-15 11:53:50.773533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.443 qpair failed and we were unable to recover it. 00:30:25.443 [2024-11-15 11:53:50.773866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.443 [2024-11-15 11:53:50.773895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.443 qpair failed and we were unable to recover it. 00:30:25.443 [2024-11-15 11:53:50.774263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.443 [2024-11-15 11:53:50.774292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.443 qpair failed and we were unable to recover it. 
00:30:25.443 [2024-11-15 11:53:50.774504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.443 [2024-11-15 11:53:50.774533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.443 qpair failed and we were unable to recover it. 00:30:25.443 [2024-11-15 11:53:50.774903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.443 [2024-11-15 11:53:50.774932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.443 qpair failed and we were unable to recover it. 00:30:25.443 [2024-11-15 11:53:50.775046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.443 [2024-11-15 11:53:50.775077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.443 qpair failed and we were unable to recover it. 00:30:25.443 [2024-11-15 11:53:50.775303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.443 [2024-11-15 11:53:50.775333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.443 qpair failed and we were unable to recover it. 00:30:25.444 [2024-11-15 11:53:50.775599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.444 [2024-11-15 11:53:50.775629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.444 qpair failed and we were unable to recover it. 00:30:25.444 [2024-11-15 11:53:50.776012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.444 [2024-11-15 11:53:50.776040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.444 qpair failed and we were unable to recover it. 00:30:25.444 [2024-11-15 11:53:50.776398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.444 [2024-11-15 11:53:50.776432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.444 qpair failed and we were unable to recover it. 00:30:25.444 [2024-11-15 11:53:50.776782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.444 [2024-11-15 11:53:50.776812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.444 qpair failed and we were unable to recover it. 00:30:25.444 [2024-11-15 11:53:50.777045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.444 [2024-11-15 11:53:50.777072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.444 qpair failed and we were unable to recover it. 00:30:25.444 [2024-11-15 11:53:50.777345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.444 [2024-11-15 11:53:50.777374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.444 qpair failed and we were unable to recover it. 
00:30:25.444 [2024-11-15 11:53:50.777582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.444 [2024-11-15 11:53:50.777611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.444 qpair failed and we were unable to recover it. 00:30:25.444 [2024-11-15 11:53:50.777850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.444 [2024-11-15 11:53:50.777877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.444 qpair failed and we were unable to recover it. 00:30:25.444 [2024-11-15 11:53:50.778225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.444 [2024-11-15 11:53:50.778252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.444 qpair failed and we were unable to recover it. 00:30:25.444 [2024-11-15 11:53:50.778643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.444 [2024-11-15 11:53:50.778672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.444 qpair failed and we were unable to recover it. 00:30:25.444 [2024-11-15 11:53:50.778904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.444 [2024-11-15 11:53:50.778932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.444 qpair failed and we were unable to recover it. 00:30:25.444 [2024-11-15 11:53:50.779329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.444 [2024-11-15 11:53:50.779357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.444 qpair failed and we were unable to recover it. 00:30:25.444 [2024-11-15 11:53:50.779753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.444 [2024-11-15 11:53:50.779782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.444 qpair failed and we were unable to recover it. 00:30:25.444 [2024-11-15 11:53:50.780145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.444 [2024-11-15 11:53:50.780174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.444 qpair failed and we were unable to recover it. 00:30:25.444 [2024-11-15 11:53:50.780556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.444 [2024-11-15 11:53:50.780596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.444 qpair failed and we were unable to recover it. 00:30:25.444 [2024-11-15 11:53:50.780991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.444 [2024-11-15 11:53:50.781019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.444 qpair failed and we were unable to recover it. 
00:30:25.444 [2024-11-15 11:53:50.781425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.444 [2024-11-15 11:53:50.781453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.444 qpair failed and we were unable to recover it. 00:30:25.444 [2024-11-15 11:53:50.781806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.444 [2024-11-15 11:53:50.781835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.444 qpair failed and we were unable to recover it. 00:30:25.444 [2024-11-15 11:53:50.782219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.444 [2024-11-15 11:53:50.782246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.444 qpair failed and we were unable to recover it. 00:30:25.444 [2024-11-15 11:53:50.782466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.444 [2024-11-15 11:53:50.782494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.444 qpair failed and we were unable to recover it. 00:30:25.444 [2024-11-15 11:53:50.782916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.444 [2024-11-15 11:53:50.782945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.444 qpair failed and we were unable to recover it. 00:30:25.444 [2024-11-15 11:53:50.783330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.444 [2024-11-15 11:53:50.783358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.444 qpair failed and we were unable to recover it. 00:30:25.444 [2024-11-15 11:53:50.783695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.444 [2024-11-15 11:53:50.783724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.444 qpair failed and we were unable to recover it. 00:30:25.444 [2024-11-15 11:53:50.784073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.444 [2024-11-15 11:53:50.784101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.444 qpair failed and we were unable to recover it. 00:30:25.444 [2024-11-15 11:53:50.784499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.444 [2024-11-15 11:53:50.784527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.444 qpair failed and we were unable to recover it. 00:30:25.444 [2024-11-15 11:53:50.784629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.444 [2024-11-15 11:53:50.784657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.444 qpair failed and we were unable to recover it. 
00:30:25.444 [2024-11-15 11:53:50.784942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.444 [2024-11-15 11:53:50.784970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.444 qpair failed and we were unable to recover it. 00:30:25.444 [2024-11-15 11:53:50.785351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.444 [2024-11-15 11:53:50.785378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.444 qpair failed and we were unable to recover it. 00:30:25.444 [2024-11-15 11:53:50.785749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.444 [2024-11-15 11:53:50.785778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.444 qpair failed and we were unable to recover it. 00:30:25.444 [2024-11-15 11:53:50.786006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.444 [2024-11-15 11:53:50.786040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.444 qpair failed and we were unable to recover it. 00:30:25.444 [2024-11-15 11:53:50.786309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.444 [2024-11-15 11:53:50.786340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.444 qpair failed and we were unable to recover it. 00:30:25.444 [2024-11-15 11:53:50.786734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.444 [2024-11-15 11:53:50.786764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.444 qpair failed and we were unable to recover it. 00:30:25.444 [2024-11-15 11:53:50.787121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.444 [2024-11-15 11:53:50.787149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.444 qpair failed and we were unable to recover it. 00:30:25.444 [2024-11-15 11:53:50.787384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.444 [2024-11-15 11:53:50.787412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.444 qpair failed and we were unable to recover it. 00:30:25.444 [2024-11-15 11:53:50.787803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.444 [2024-11-15 11:53:50.787832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.444 qpair failed and we were unable to recover it. 00:30:25.444 [2024-11-15 11:53:50.788165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.444 [2024-11-15 11:53:50.788193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.444 qpair failed and we were unable to recover it. 
00:30:25.444 [2024-11-15 11:53:50.788419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.444 [2024-11-15 11:53:50.788450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.444 qpair failed and we were unable to recover it. 00:30:25.444 [2024-11-15 11:53:50.788694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.444 [2024-11-15 11:53:50.788724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.444 qpair failed and we were unable to recover it. 00:30:25.444 [2024-11-15 11:53:50.788932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.445 [2024-11-15 11:53:50.788960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.445 qpair failed and we were unable to recover it. 00:30:25.445 [2024-11-15 11:53:50.789314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.445 [2024-11-15 11:53:50.789342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.445 qpair failed and we were unable to recover it. 00:30:25.445 [2024-11-15 11:53:50.789675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.445 [2024-11-15 11:53:50.789705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.445 qpair failed and we were unable to recover it. 00:30:25.445 [2024-11-15 11:53:50.790074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.445 [2024-11-15 11:53:50.790102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.445 qpair failed and we were unable to recover it. 00:30:25.445 [2024-11-15 11:53:50.790493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.445 [2024-11-15 11:53:50.790521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.445 qpair failed and we were unable to recover it. 00:30:25.445 [2024-11-15 11:53:50.790952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.445 [2024-11-15 11:53:50.790982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.445 qpair failed and we were unable to recover it. 00:30:25.445 [2024-11-15 11:53:50.791188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.445 [2024-11-15 11:53:50.791217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.445 qpair failed and we were unable to recover it. 00:30:25.445 [2024-11-15 11:53:50.791615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.445 [2024-11-15 11:53:50.791644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.445 qpair failed and we were unable to recover it. 
00:30:25.445 [2024-11-15 11:53:50.791904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.445 [2024-11-15 11:53:50.791931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.445 qpair failed and we were unable to recover it. 00:30:25.445 [2024-11-15 11:53:50.792156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.445 [2024-11-15 11:53:50.792185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.445 qpair failed and we were unable to recover it. 00:30:25.445 [2024-11-15 11:53:50.792503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.445 [2024-11-15 11:53:50.792532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.445 qpair failed and we were unable to recover it. 00:30:25.445 [2024-11-15 11:53:50.792919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.445 [2024-11-15 11:53:50.792949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.445 qpair failed and we were unable to recover it. 00:30:25.445 [2024-11-15 11:53:50.793316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.445 [2024-11-15 11:53:50.793344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.445 qpair failed and we were unable to recover it. 00:30:25.445 [2024-11-15 11:53:50.793603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.445 [2024-11-15 11:53:50.793634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.445 qpair failed and we were unable to recover it. 00:30:25.445 [2024-11-15 11:53:50.793999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.445 [2024-11-15 11:53:50.794027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.445 qpair failed and we were unable to recover it. 00:30:25.445 [2024-11-15 11:53:50.794373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.445 [2024-11-15 11:53:50.794401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.445 qpair failed and we were unable to recover it. 00:30:25.445 [2024-11-15 11:53:50.794795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.445 [2024-11-15 11:53:50.794824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.445 qpair failed and we were unable to recover it. 00:30:25.445 [2024-11-15 11:53:50.795086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.445 [2024-11-15 11:53:50.795113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.445 qpair failed and we were unable to recover it. 
00:30:25.445 [2024-11-15 11:53:50.795483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.445 [2024-11-15 11:53:50.795511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.445 qpair failed and we were unable to recover it. 00:30:25.445 [2024-11-15 11:53:50.795887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.445 [2024-11-15 11:53:50.795916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.445 qpair failed and we were unable to recover it. 00:30:25.445 [2024-11-15 11:53:50.796199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.445 [2024-11-15 11:53:50.796226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.445 qpair failed and we were unable to recover it. 00:30:25.445 [2024-11-15 11:53:50.796596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.445 [2024-11-15 11:53:50.796626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.445 qpair failed and we were unable to recover it. 00:30:25.445 [2024-11-15 11:53:50.796948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.445 [2024-11-15 11:53:50.796976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.445 qpair failed and we were unable to recover it. 00:30:25.445 [2024-11-15 11:53:50.797326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.445 [2024-11-15 11:53:50.797354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.445 qpair failed and we were unable to recover it. 00:30:25.445 [2024-11-15 11:53:50.797594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.445 [2024-11-15 11:53:50.797624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.445 qpair failed and we were unable to recover it. 00:30:25.445 [2024-11-15 11:53:50.797871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.445 [2024-11-15 11:53:50.797899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.445 qpair failed and we were unable to recover it. 00:30:25.445 [2024-11-15 11:53:50.798171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.445 [2024-11-15 11:53:50.798199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.445 qpair failed and we were unable to recover it. 00:30:25.445 [2024-11-15 11:53:50.798423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.445 [2024-11-15 11:53:50.798451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.445 qpair failed and we were unable to recover it. 
00:30:25.445 [2024-11-15 11:53:50.798793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.445 [2024-11-15 11:53:50.798822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.445 qpair failed and we were unable to recover it. 00:30:25.445 [2024-11-15 11:53:50.799191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.445 [2024-11-15 11:53:50.799218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.445 qpair failed and we were unable to recover it. 00:30:25.445 [2024-11-15 11:53:50.799598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.445 [2024-11-15 11:53:50.799628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.445 qpair failed and we were unable to recover it. 00:30:25.445 [2024-11-15 11:53:50.799849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.445 [2024-11-15 11:53:50.799878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.445 qpair failed and we were unable to recover it. 00:30:25.445 [2024-11-15 11:53:50.800247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.445 [2024-11-15 11:53:50.800280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.445 qpair failed and we were unable to recover it. 00:30:25.445 [2024-11-15 11:53:50.800673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.445 [2024-11-15 11:53:50.800703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.445 qpair failed and we were unable to recover it. 00:30:25.445 [2024-11-15 11:53:50.800950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.445 [2024-11-15 11:53:50.800978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.445 qpair failed and we were unable to recover it. 00:30:25.445 [2024-11-15 11:53:50.801392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.445 [2024-11-15 11:53:50.801420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.445 qpair failed and we were unable to recover it. 00:30:25.445 [2024-11-15 11:53:50.801797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.445 [2024-11-15 11:53:50.801826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.445 qpair failed and we were unable to recover it. 00:30:25.445 [2024-11-15 11:53:50.802165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.445 [2024-11-15 11:53:50.802192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.445 qpair failed and we were unable to recover it. 
00:30:25.445 [2024-11-15 11:53:50.802401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.445 [2024-11-15 11:53:50.802429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.445 qpair failed and we were unable to recover it. 00:30:25.445 [2024-11-15 11:53:50.802794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.446 [2024-11-15 11:53:50.802823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.446 qpair failed and we were unable to recover it. 00:30:25.446 [2024-11-15 11:53:50.803160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.446 [2024-11-15 11:53:50.803187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.446 qpair failed and we were unable to recover it. 00:30:25.446 [2024-11-15 11:53:50.803577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.446 [2024-11-15 11:53:50.803605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.446 qpair failed and we were unable to recover it. 00:30:25.446 [2024-11-15 11:53:50.803986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.446 [2024-11-15 11:53:50.804014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.446 qpair failed and we were unable to recover it. 00:30:25.446 [2024-11-15 11:53:50.804402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.446 [2024-11-15 11:53:50.804430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.446 qpair failed and we were unable to recover it. 00:30:25.446 [2024-11-15 11:53:50.804638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.446 [2024-11-15 11:53:50.804666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.446 qpair failed and we were unable to recover it. 00:30:25.446 [2024-11-15 11:53:50.804884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.446 [2024-11-15 11:53:50.804912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.446 qpair failed and we were unable to recover it. 00:30:25.446 [2024-11-15 11:53:50.805308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.446 [2024-11-15 11:53:50.805337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.446 qpair failed and we were unable to recover it. 00:30:25.446 [2024-11-15 11:53:50.805436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.446 [2024-11-15 11:53:50.805462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.446 qpair failed and we were unable to recover it. 
00:30:25.451 [2024-11-15 11:53:50.877920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-11-15 11:53:50.877950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-11-15 11:53:50.878314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-11-15 11:53:50.878342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-11-15 11:53:50.878734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-11-15 11:53:50.878763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-11-15 11:53:50.878976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-11-15 11:53:50.879003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-11-15 11:53:50.879436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-11-15 11:53:50.879464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-11-15 11:53:50.879861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-11-15 11:53:50.879891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-11-15 11:53:50.880255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-11-15 11:53:50.880282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-11-15 11:53:50.880515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-11-15 11:53:50.880544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-11-15 11:53:50.880787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-11-15 11:53:50.880816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-11-15 11:53:50.881210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-11-15 11:53:50.881238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 
00:30:25.451 [2024-11-15 11:53:50.881642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-11-15 11:53:50.881671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-11-15 11:53:50.882047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-11-15 11:53:50.882074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-11-15 11:53:50.882431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-11-15 11:53:50.882459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.452 [2024-11-15 11:53:50.882556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-11-15 11:53:50.882590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-11-15 11:53:50.883039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-11-15 11:53:50.883067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-11-15 11:53:50.883403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-11-15 11:53:50.883431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-11-15 11:53:50.883769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-11-15 11:53:50.883799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-11-15 11:53:50.884162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-11-15 11:53:50.884189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-11-15 11:53:50.884580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-11-15 11:53:50.884610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-11-15 11:53:50.885015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-11-15 11:53:50.885043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 
00:30:25.452 [2024-11-15 11:53:50.885288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-11-15 11:53:50.885315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-11-15 11:53:50.885717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-11-15 11:53:50.885746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-11-15 11:53:50.886073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-11-15 11:53:50.886101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-11-15 11:53:50.886481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-11-15 11:53:50.886510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-11-15 11:53:50.886886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-11-15 11:53:50.886915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-11-15 11:53:50.887260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-11-15 11:53:50.887289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-11-15 11:53:50.887664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-11-15 11:53:50.887694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-11-15 11:53:50.887932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-11-15 11:53:50.887960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-11-15 11:53:50.888382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-11-15 11:53:50.888410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-11-15 11:53:50.888722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-11-15 11:53:50.888752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 
00:30:25.452 [2024-11-15 11:53:50.888847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-11-15 11:53:50.888873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-11-15 11:53:50.889276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-11-15 11:53:50.889304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-11-15 11:53:50.889674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-11-15 11:53:50.889703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-11-15 11:53:50.889918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-11-15 11:53:50.889946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-11-15 11:53:50.890168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-11-15 11:53:50.890196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-11-15 11:53:50.890517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-11-15 11:53:50.890545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-11-15 11:53:50.890904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-11-15 11:53:50.890933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-11-15 11:53:50.891167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-11-15 11:53:50.891195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-11-15 11:53:50.891345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-11-15 11:53:50.891373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-11-15 11:53:50.891721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-11-15 11:53:50.891752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 
00:30:25.452 [2024-11-15 11:53:50.892095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-11-15 11:53:50.892123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-11-15 11:53:50.892454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-11-15 11:53:50.892483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-11-15 11:53:50.892858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-11-15 11:53:50.892886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-11-15 11:53:50.893273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-11-15 11:53:50.893300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-11-15 11:53:50.893679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-11-15 11:53:50.893708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-11-15 11:53:50.893942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-11-15 11:53:50.893969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-11-15 11:53:50.894207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-11-15 11:53:50.894236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-11-15 11:53:50.894505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-11-15 11:53:50.894537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-11-15 11:53:50.894906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-11-15 11:53:50.894936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-11-15 11:53:50.895326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-11-15 11:53:50.895355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 
00:30:25.452 [2024-11-15 11:53:50.895730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-11-15 11:53:50.895759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-11-15 11:53:50.896124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-11-15 11:53:50.896151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-11-15 11:53:50.896539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-11-15 11:53:50.896574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-11-15 11:53:50.896989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-11-15 11:53:50.897017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-11-15 11:53:50.897387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-11-15 11:53:50.897415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-11-15 11:53:50.897757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-11-15 11:53:50.897785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-11-15 11:53:50.898151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-11-15 11:53:50.898185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-11-15 11:53:50.898571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-11-15 11:53:50.898600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-11-15 11:53:50.898969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-11-15 11:53:50.898997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-11-15 11:53:50.899359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-11-15 11:53:50.899390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 
00:30:25.453 [2024-11-15 11:53:50.899759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-11-15 11:53:50.899789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-11-15 11:53:50.900021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-11-15 11:53:50.900050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-11-15 11:53:50.900410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-11-15 11:53:50.900438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-11-15 11:53:50.900811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-11-15 11:53:50.900840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-11-15 11:53:50.901062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-11-15 11:53:50.901091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-11-15 11:53:50.901504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-11-15 11:53:50.901532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-11-15 11:53:50.901908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-11-15 11:53:50.901937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-11-15 11:53:50.902286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-11-15 11:53:50.902314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-11-15 11:53:50.902691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-11-15 11:53:50.902719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-11-15 11:53:50.903092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-11-15 11:53:50.903120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 
00:30:25.453 [2024-11-15 11:53:50.903364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-11-15 11:53:50.903393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-11-15 11:53:50.903791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-11-15 11:53:50.903820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-11-15 11:53:50.904185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-11-15 11:53:50.904212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-11-15 11:53:50.904441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-11-15 11:53:50.904472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-11-15 11:53:50.904578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-11-15 11:53:50.904606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x80b0c0 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-11-15 11:53:50.905192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-11-15 11:53:50.905305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-11-15 11:53:50.905826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-11-15 11:53:50.905876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-11-15 11:53:50.906159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-11-15 11:53:50.906201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-11-15 11:53:50.906502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-11-15 11:53:50.906539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-11-15 11:53:50.906968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-11-15 11:53:50.907006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 
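errno = 111 on Linux is ECONNREFUSED: each TCP SYN sent to 10.0.0.2:4420 was answered with a reset, meaning the host was reachable but nothing was listening on that port while these attempts ran (4420 is the IANA-assigned NVMe/TCP port). A minimal standalone sketch, not SPDK code, with the address and port taken from the log above, reproduces the exact errno that posix_sock_create() reports:

/* sketch: connect() to a reachable host with no listener on the port
 * fails with errno = 111 (ECONNREFUSED) on Linux */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* if the host is up but the port is closed, this prints errno = 111 */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}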
00:30:25.453 [... the same record repeats for the new tqpair=0x7fa108000b90, every attempt failing with errno = 111, through 11:53:50.941185 (wall clock 00:30:25.729) ...]
00:30:25.729 [2024-11-15 11:53:50.941582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.730 [2024-11-15 11:53:50.941620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420
00:30:25.730 qpair failed and we were unable to recover it.
00:30:25.730 [2024-11-15 11:53:50.941991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.730 [2024-11-15 11:53:50.942026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.730 qpair failed and we were unable to recover it. 00:30:25.730 [2024-11-15 11:53:50.942419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.730 [2024-11-15 11:53:50.942455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.730 qpair failed and we were unable to recover it. 00:30:25.730 [2024-11-15 11:53:50.942811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.730 [2024-11-15 11:53:50.942847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.730 qpair failed and we were unable to recover it. 00:30:25.730 [2024-11-15 11:53:50.943205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.730 [2024-11-15 11:53:50.943240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.730 qpair failed and we were unable to recover it. 00:30:25.730 [2024-11-15 11:53:50.943636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.730 [2024-11-15 11:53:50.943672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.730 qpair failed and we were unable to recover it. 00:30:25.730 [2024-11-15 11:53:50.944068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.730 [2024-11-15 11:53:50.944103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.730 qpair failed and we were unable to recover it. 00:30:25.730 [2024-11-15 11:53:50.944468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.730 [2024-11-15 11:53:50.944503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.730 qpair failed and we were unable to recover it. 00:30:25.730 [2024-11-15 11:53:50.944915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.730 [2024-11-15 11:53:50.944952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.730 qpair failed and we were unable to recover it. 00:30:25.730 [2024-11-15 11:53:50.945309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.730 [2024-11-15 11:53:50.945344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.730 qpair failed and we were unable to recover it. 00:30:25.730 [2024-11-15 11:53:50.945715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.730 [2024-11-15 11:53:50.945753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.730 qpair failed and we were unable to recover it. 
00:30:25.730 [2024-11-15 11:53:50.946111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.730 [2024-11-15 11:53:50.946146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.730 qpair failed and we were unable to recover it. 00:30:25.730 [2024-11-15 11:53:50.946504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.730 [2024-11-15 11:53:50.946539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.730 qpair failed and we were unable to recover it. 00:30:25.730 [2024-11-15 11:53:50.946922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.730 [2024-11-15 11:53:50.946958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.730 qpair failed and we were unable to recover it. 00:30:25.730 [2024-11-15 11:53:50.947321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.730 [2024-11-15 11:53:50.947355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.730 qpair failed and we were unable to recover it. 00:30:25.730 [2024-11-15 11:53:50.947729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.730 [2024-11-15 11:53:50.947766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.730 qpair failed and we were unable to recover it. 00:30:25.730 [2024-11-15 11:53:50.948121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.730 [2024-11-15 11:53:50.948156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.730 qpair failed and we were unable to recover it. 00:30:25.730 [2024-11-15 11:53:50.948397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.730 [2024-11-15 11:53:50.948431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.730 qpair failed and we were unable to recover it. 00:30:25.730 [2024-11-15 11:53:50.948831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.730 [2024-11-15 11:53:50.948867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.730 qpair failed and we were unable to recover it. 00:30:25.730 [2024-11-15 11:53:50.949285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.730 [2024-11-15 11:53:50.949321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.730 qpair failed and we were unable to recover it. 00:30:25.730 [2024-11-15 11:53:50.949683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.730 [2024-11-15 11:53:50.949719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.730 qpair failed and we were unable to recover it. 
00:30:25.730 [2024-11-15 11:53:50.950098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.730 [2024-11-15 11:53:50.950134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.730 qpair failed and we were unable to recover it. 00:30:25.730 [2024-11-15 11:53:50.950502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.730 [2024-11-15 11:53:50.950538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.730 qpair failed and we were unable to recover it. 00:30:25.730 [2024-11-15 11:53:50.950924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.730 [2024-11-15 11:53:50.950959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.730 qpair failed and we were unable to recover it. 00:30:25.730 [2024-11-15 11:53:50.951315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.730 [2024-11-15 11:53:50.951350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.730 qpair failed and we were unable to recover it. 00:30:25.730 [2024-11-15 11:53:50.951711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.730 [2024-11-15 11:53:50.951748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.730 qpair failed and we were unable to recover it. 00:30:25.730 [2024-11-15 11:53:50.951978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.730 [2024-11-15 11:53:50.952012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.730 qpair failed and we were unable to recover it. 00:30:25.730 [2024-11-15 11:53:50.952268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.730 [2024-11-15 11:53:50.952306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.730 qpair failed and we were unable to recover it. 00:30:25.730 [2024-11-15 11:53:50.952700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.730 [2024-11-15 11:53:50.952736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.730 qpair failed and we were unable to recover it. 00:30:25.730 [2024-11-15 11:53:50.953090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.730 [2024-11-15 11:53:50.953125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.730 qpair failed and we were unable to recover it. 00:30:25.730 [2024-11-15 11:53:50.953476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.730 [2024-11-15 11:53:50.953512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.730 qpair failed and we were unable to recover it. 
00:30:25.730 [2024-11-15 11:53:50.953791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.730 [2024-11-15 11:53:50.953829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.730 qpair failed and we were unable to recover it. 00:30:25.730 [2024-11-15 11:53:50.954258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.730 [2024-11-15 11:53:50.954293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.730 qpair failed and we were unable to recover it. 00:30:25.730 [2024-11-15 11:53:50.954646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.730 [2024-11-15 11:53:50.954683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.730 qpair failed and we were unable to recover it. 00:30:25.730 [2024-11-15 11:53:50.955058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.730 [2024-11-15 11:53:50.955093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.730 qpair failed and we were unable to recover it. 00:30:25.730 [2024-11-15 11:53:50.955499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.730 [2024-11-15 11:53:50.955534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.730 qpair failed and we were unable to recover it. 00:30:25.730 [2024-11-15 11:53:50.955828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.730 [2024-11-15 11:53:50.955863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.730 qpair failed and we were unable to recover it. 00:30:25.730 [2024-11-15 11:53:50.956292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.730 [2024-11-15 11:53:50.956327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.730 qpair failed and we were unable to recover it. 00:30:25.730 [2024-11-15 11:53:50.956694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.731 [2024-11-15 11:53:50.956732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-11-15 11:53:50.957113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.731 [2024-11-15 11:53:50.957148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-11-15 11:53:50.957535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.731 [2024-11-15 11:53:50.957581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.731 qpair failed and we were unable to recover it. 
00:30:25.731 [2024-11-15 11:53:50.957927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.731 [2024-11-15 11:53:50.957963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-11-15 11:53:50.958329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.731 [2024-11-15 11:53:50.958365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-11-15 11:53:50.958746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.731 [2024-11-15 11:53:50.958782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-11-15 11:53:50.959160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.731 [2024-11-15 11:53:50.959195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-11-15 11:53:50.959553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.731 [2024-11-15 11:53:50.959611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-11-15 11:53:50.960002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.731 [2024-11-15 11:53:50.960037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-11-15 11:53:50.960424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.731 [2024-11-15 11:53:50.960459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-11-15 11:53:50.960861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.731 [2024-11-15 11:53:50.960900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-11-15 11:53:50.961258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.731 [2024-11-15 11:53:50.961294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-11-15 11:53:50.961605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.731 [2024-11-15 11:53:50.961642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.731 qpair failed and we were unable to recover it. 
00:30:25.731 [2024-11-15 11:53:50.962030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.731 [2024-11-15 11:53:50.962066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-11-15 11:53:50.962472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.731 [2024-11-15 11:53:50.962507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-11-15 11:53:50.962771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.731 [2024-11-15 11:53:50.962806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-11-15 11:53:50.963049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.731 [2024-11-15 11:53:50.963084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-11-15 11:53:50.963328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.731 [2024-11-15 11:53:50.963362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-11-15 11:53:50.963728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.731 [2024-11-15 11:53:50.963764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-11-15 11:53:50.964036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.731 [2024-11-15 11:53:50.964071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-11-15 11:53:50.964434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.731 [2024-11-15 11:53:50.964469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-11-15 11:53:50.964710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.731 [2024-11-15 11:53:50.964747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-11-15 11:53:50.965109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.731 [2024-11-15 11:53:50.965145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.731 qpair failed and we were unable to recover it. 
00:30:25.731 [2024-11-15 11:53:50.965390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.731 [2024-11-15 11:53:50.965433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-11-15 11:53:50.965697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.731 [2024-11-15 11:53:50.965734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-11-15 11:53:50.966050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.731 [2024-11-15 11:53:50.966086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-11-15 11:53:50.966476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.731 [2024-11-15 11:53:50.966512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-11-15 11:53:50.966935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.731 [2024-11-15 11:53:50.966972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-11-15 11:53:50.967335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.731 [2024-11-15 11:53:50.967372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-11-15 11:53:50.967631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.731 [2024-11-15 11:53:50.967670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-11-15 11:53:50.968042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.731 [2024-11-15 11:53:50.968078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-11-15 11:53:50.968436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.731 [2024-11-15 11:53:50.968472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-11-15 11:53:50.968913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.731 [2024-11-15 11:53:50.968953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.731 qpair failed and we were unable to recover it. 
00:30:25.731 [2024-11-15 11:53:50.969193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.731 [2024-11-15 11:53:50.969229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-11-15 11:53:50.969609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.731 [2024-11-15 11:53:50.969646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-11-15 11:53:50.970011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.731 [2024-11-15 11:53:50.970046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-11-15 11:53:50.970424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.731 [2024-11-15 11:53:50.970460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-11-15 11:53:50.970953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.731 [2024-11-15 11:53:50.970991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.731 qpair failed and we were unable to recover it. 00:30:25.731 [2024-11-15 11:53:50.971350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.732 [2024-11-15 11:53:50.971386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.732 qpair failed and we were unable to recover it. 00:30:25.732 [2024-11-15 11:53:50.971777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.732 [2024-11-15 11:53:50.971814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.732 qpair failed and we were unable to recover it. 00:30:25.732 [2024-11-15 11:53:50.972198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.732 [2024-11-15 11:53:50.972234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.732 qpair failed and we were unable to recover it. 00:30:25.732 [2024-11-15 11:53:50.972653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.732 [2024-11-15 11:53:50.972690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.732 qpair failed and we were unable to recover it. 00:30:25.732 [2024-11-15 11:53:50.972923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.732 [2024-11-15 11:53:50.972959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.732 qpair failed and we were unable to recover it. 
00:30:25.732 [2024-11-15 11:53:50.973331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.732 [2024-11-15 11:53:50.973366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.732 qpair failed and we were unable to recover it. 00:30:25.732 [2024-11-15 11:53:50.973616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.732 [2024-11-15 11:53:50.973652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.732 qpair failed and we were unable to recover it. 00:30:25.732 [2024-11-15 11:53:50.973900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.732 [2024-11-15 11:53:50.973935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.732 qpair failed and we were unable to recover it. 00:30:25.732 [2024-11-15 11:53:50.974318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.732 [2024-11-15 11:53:50.974353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.732 qpair failed and we were unable to recover it. 00:30:25.732 [2024-11-15 11:53:50.974710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.732 [2024-11-15 11:53:50.974745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.732 qpair failed and we were unable to recover it. 00:30:25.732 [2024-11-15 11:53:50.975108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.732 [2024-11-15 11:53:50.975143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.732 qpair failed and we were unable to recover it. 00:30:25.732 [2024-11-15 11:53:50.975510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.732 [2024-11-15 11:53:50.975545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.732 qpair failed and we were unable to recover it. 00:30:25.732 [2024-11-15 11:53:50.975849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.732 [2024-11-15 11:53:50.975886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.732 qpair failed and we were unable to recover it. 00:30:25.732 [2024-11-15 11:53:50.976230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.732 [2024-11-15 11:53:50.976265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.732 qpair failed and we were unable to recover it. 00:30:25.732 [2024-11-15 11:53:50.976613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.732 [2024-11-15 11:53:50.976651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.732 qpair failed and we were unable to recover it. 
00:30:25.732 [2024-11-15 11:53:50.977015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.732 [2024-11-15 11:53:50.977050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.732 qpair failed and we were unable to recover it. 00:30:25.732 [2024-11-15 11:53:50.977409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.732 [2024-11-15 11:53:50.977444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.732 qpair failed and we were unable to recover it. 00:30:25.732 [2024-11-15 11:53:50.977816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.732 [2024-11-15 11:53:50.977853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.732 qpair failed and we were unable to recover it. 00:30:25.732 [2024-11-15 11:53:50.978255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.732 [2024-11-15 11:53:50.978290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.732 qpair failed and we were unable to recover it. 00:30:25.732 [2024-11-15 11:53:50.978527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.732 [2024-11-15 11:53:50.978573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.732 qpair failed and we were unable to recover it. 00:30:25.732 [2024-11-15 11:53:50.978990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.732 [2024-11-15 11:53:50.979025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.732 qpair failed and we were unable to recover it. 00:30:25.732 [2024-11-15 11:53:50.979374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.732 [2024-11-15 11:53:50.979410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.732 qpair failed and we were unable to recover it. 00:30:25.732 [2024-11-15 11:53:50.979749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.732 [2024-11-15 11:53:50.979786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.732 qpair failed and we were unable to recover it. 00:30:25.732 [2024-11-15 11:53:50.980143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.732 [2024-11-15 11:53:50.980179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.732 qpair failed and we were unable to recover it. 00:30:25.732 [2024-11-15 11:53:50.980541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.732 [2024-11-15 11:53:50.980589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.732 qpair failed and we were unable to recover it. 
00:30:25.732 [2024-11-15 11:53:50.980935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.732 [2024-11-15 11:53:50.980978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.732 qpair failed and we were unable to recover it. 00:30:25.732 [2024-11-15 11:53:50.981341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.733 [2024-11-15 11:53:50.981376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.733 qpair failed and we were unable to recover it. 00:30:25.733 [2024-11-15 11:53:50.981750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.733 [2024-11-15 11:53:50.981787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.733 qpair failed and we were unable to recover it. 00:30:25.733 [2024-11-15 11:53:50.982165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.733 [2024-11-15 11:53:50.982200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.733 qpair failed and we were unable to recover it. 00:30:25.733 [2024-11-15 11:53:50.982571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.733 [2024-11-15 11:53:50.982615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.733 qpair failed and we were unable to recover it. 00:30:25.733 [2024-11-15 11:53:50.982891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.733 [2024-11-15 11:53:50.982926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.733 qpair failed and we were unable to recover it. 00:30:25.733 [2024-11-15 11:53:50.983291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.733 [2024-11-15 11:53:50.983327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.733 qpair failed and we were unable to recover it. 00:30:25.733 [2024-11-15 11:53:50.983732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.733 [2024-11-15 11:53:50.983768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.733 qpair failed and we were unable to recover it. 00:30:25.733 [2024-11-15 11:53:50.984156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.733 [2024-11-15 11:53:50.984191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.733 qpair failed and we were unable to recover it. 00:30:25.733 [2024-11-15 11:53:50.984459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.733 [2024-11-15 11:53:50.984494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.733 qpair failed and we were unable to recover it. 
00:30:25.733 [2024-11-15 11:53:50.984897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.733 [2024-11-15 11:53:50.984933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.733 qpair failed and we were unable to recover it. 00:30:25.733 [2024-11-15 11:53:50.985182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.733 [2024-11-15 11:53:50.985217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.733 qpair failed and we were unable to recover it. 00:30:25.733 [2024-11-15 11:53:50.985590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.733 [2024-11-15 11:53:50.985627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.733 qpair failed and we were unable to recover it. 00:30:25.733 [2024-11-15 11:53:50.986008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.733 [2024-11-15 11:53:50.986045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.733 qpair failed and we were unable to recover it. 00:30:25.733 [2024-11-15 11:53:50.986417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.733 [2024-11-15 11:53:50.986452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.733 qpair failed and we were unable to recover it. 00:30:25.733 [2024-11-15 11:53:50.986824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.733 [2024-11-15 11:53:50.986862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.733 qpair failed and we were unable to recover it. 00:30:25.733 [2024-11-15 11:53:50.987219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.733 [2024-11-15 11:53:50.987255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.733 qpair failed and we were unable to recover it. 00:30:25.733 [2024-11-15 11:53:50.987620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.733 [2024-11-15 11:53:50.987657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.733 qpair failed and we were unable to recover it. 00:30:25.733 [2024-11-15 11:53:50.988057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.733 [2024-11-15 11:53:50.988092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.733 qpair failed and we were unable to recover it. 00:30:25.733 [2024-11-15 11:53:50.988450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.733 [2024-11-15 11:53:50.988485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.733 qpair failed and we were unable to recover it. 
00:30:25.733 [2024-11-15 11:53:50.988751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.733 [2024-11-15 11:53:50.988787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.733 qpair failed and we were unable to recover it. 00:30:25.733 [2024-11-15 11:53:50.989222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.733 [2024-11-15 11:53:50.989257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.733 qpair failed and we were unable to recover it. 00:30:25.733 [2024-11-15 11:53:50.989503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.733 [2024-11-15 11:53:50.989541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.733 qpair failed and we were unable to recover it. 00:30:25.733 [2024-11-15 11:53:50.989973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.733 [2024-11-15 11:53:50.990010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.733 qpair failed and we were unable to recover it. 00:30:25.733 [2024-11-15 11:53:50.990375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.733 [2024-11-15 11:53:50.990411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.733 qpair failed and we were unable to recover it. 00:30:25.733 [2024-11-15 11:53:50.990803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.733 [2024-11-15 11:53:50.990841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.733 qpair failed and we were unable to recover it. 00:30:25.733 [2024-11-15 11:53:50.991199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.733 [2024-11-15 11:53:50.991234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.733 qpair failed and we were unable to recover it. 00:30:25.733 [2024-11-15 11:53:50.991529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.733 [2024-11-15 11:53:50.991584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.733 qpair failed and we were unable to recover it. 00:30:25.733 [2024-11-15 11:53:50.991826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.733 [2024-11-15 11:53:50.991862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.733 qpair failed and we were unable to recover it. 00:30:25.733 [2024-11-15 11:53:50.992246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.733 [2024-11-15 11:53:50.992282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.733 qpair failed and we were unable to recover it. 
00:30:25.733 [2024-11-15 11:53:50.992643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.733 [2024-11-15 11:53:50.992680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420
00:30:25.733 qpair failed and we were unable to recover it.
00:30:25.739 [identical error triplet repeated for every reconnect attempt from 11:53:50.992 through 11:53:51.070: connect() to addr=10.0.0.2, port=4420 fails with errno = 111 (connection refused), nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7fa108000b90, and the qpair cannot be recovered]
00:30:25.739 [2024-11-15 11:53:51.071004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.739 [2024-11-15 11:53:51.071049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.739 qpair failed and we were unable to recover it. 00:30:25.739 [2024-11-15 11:53:51.071412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.739 [2024-11-15 11:53:51.071449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.739 qpair failed and we were unable to recover it. 00:30:25.739 [2024-11-15 11:53:51.071882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.739 [2024-11-15 11:53:51.071920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.739 qpair failed and we were unable to recover it. 00:30:25.739 [2024-11-15 11:53:51.072286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.739 [2024-11-15 11:53:51.072322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.739 qpair failed and we were unable to recover it. 00:30:25.739 [2024-11-15 11:53:51.072586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.739 [2024-11-15 11:53:51.072624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.739 qpair failed and we were unable to recover it. 00:30:25.739 [2024-11-15 11:53:51.073044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.739 [2024-11-15 11:53:51.073081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.739 qpair failed and we were unable to recover it. 00:30:25.739 [2024-11-15 11:53:51.073494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.739 [2024-11-15 11:53:51.073530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.739 qpair failed and we were unable to recover it. 00:30:25.739 [2024-11-15 11:53:51.073936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.739 [2024-11-15 11:53:51.073973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.739 qpair failed and we were unable to recover it. 00:30:25.739 [2024-11-15 11:53:51.074348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.739 [2024-11-15 11:53:51.074383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.739 qpair failed and we were unable to recover it. 00:30:25.739 [2024-11-15 11:53:51.074831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.739 [2024-11-15 11:53:51.074868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.739 qpair failed and we were unable to recover it. 
00:30:25.739 [2024-11-15 11:53:51.075225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.739 [2024-11-15 11:53:51.075261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.739 qpair failed and we were unable to recover it. 00:30:25.739 [2024-11-15 11:53:51.075623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.739 [2024-11-15 11:53:51.075662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.739 qpair failed and we were unable to recover it. 00:30:25.739 [2024-11-15 11:53:51.076060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.739 [2024-11-15 11:53:51.076096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.739 qpair failed and we were unable to recover it. 00:30:25.739 [2024-11-15 11:53:51.076453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.739 [2024-11-15 11:53:51.076490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.739 qpair failed and we were unable to recover it. 00:30:25.739 [2024-11-15 11:53:51.076693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.739 [2024-11-15 11:53:51.076731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.739 qpair failed and we were unable to recover it. 00:30:25.739 [2024-11-15 11:53:51.077149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.739 [2024-11-15 11:53:51.077185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.739 qpair failed and we were unable to recover it. 00:30:25.739 [2024-11-15 11:53:51.077472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.739 [2024-11-15 11:53:51.077511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.739 qpair failed and we were unable to recover it. 00:30:25.739 [2024-11-15 11:53:51.077885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.739 [2024-11-15 11:53:51.077922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.739 qpair failed and we were unable to recover it. 00:30:25.739 [2024-11-15 11:53:51.078195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.739 [2024-11-15 11:53:51.078231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.739 qpair failed and we were unable to recover it. 00:30:25.739 [2024-11-15 11:53:51.078657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.739 [2024-11-15 11:53:51.078694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.740 qpair failed and we were unable to recover it. 
00:30:25.740 [2024-11-15 11:53:51.078935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.740 [2024-11-15 11:53:51.078971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.740 qpair failed and we were unable to recover it. 00:30:25.740 [2024-11-15 11:53:51.079346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.740 [2024-11-15 11:53:51.079382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.740 qpair failed and we were unable to recover it. 00:30:25.740 [2024-11-15 11:53:51.079740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.740 [2024-11-15 11:53:51.079778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.740 qpair failed and we were unable to recover it. 00:30:25.740 [2024-11-15 11:53:51.080016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.740 [2024-11-15 11:53:51.080052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.740 qpair failed and we were unable to recover it. 00:30:25.740 [2024-11-15 11:53:51.080412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.740 [2024-11-15 11:53:51.080448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.740 qpair failed and we were unable to recover it. 00:30:25.740 [2024-11-15 11:53:51.080700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.740 [2024-11-15 11:53:51.080736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.740 qpair failed and we were unable to recover it. 00:30:25.740 [2024-11-15 11:53:51.081130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.740 [2024-11-15 11:53:51.081166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.740 qpair failed and we were unable to recover it. 00:30:25.740 [2024-11-15 11:53:51.081554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.740 [2024-11-15 11:53:51.081611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.740 qpair failed and we were unable to recover it. 00:30:25.740 [2024-11-15 11:53:51.081974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.740 [2024-11-15 11:53:51.082009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.740 qpair failed and we were unable to recover it. 00:30:25.740 [2024-11-15 11:53:51.082373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.740 [2024-11-15 11:53:51.082409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.740 qpair failed and we were unable to recover it. 
00:30:25.740 [2024-11-15 11:53:51.082765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.740 [2024-11-15 11:53:51.082803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.740 qpair failed and we were unable to recover it. 00:30:25.740 [2024-11-15 11:53:51.083164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.740 [2024-11-15 11:53:51.083200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.740 qpair failed and we were unable to recover it. 00:30:25.740 [2024-11-15 11:53:51.083559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.740 [2024-11-15 11:53:51.083607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.740 qpair failed and we were unable to recover it. 00:30:25.740 [2024-11-15 11:53:51.083844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.740 [2024-11-15 11:53:51.083880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.740 qpair failed and we were unable to recover it. 00:30:25.740 [2024-11-15 11:53:51.084182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.740 [2024-11-15 11:53:51.084218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.740 qpair failed and we were unable to recover it. 00:30:25.740 [2024-11-15 11:53:51.084617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.740 [2024-11-15 11:53:51.084654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.740 qpair failed and we were unable to recover it. 00:30:25.740 [2024-11-15 11:53:51.084897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.740 [2024-11-15 11:53:51.084933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.740 qpair failed and we were unable to recover it. 00:30:25.740 [2024-11-15 11:53:51.085335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.740 [2024-11-15 11:53:51.085371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.740 qpair failed and we were unable to recover it. 00:30:25.740 [2024-11-15 11:53:51.085640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.740 [2024-11-15 11:53:51.085676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.740 qpair failed and we were unable to recover it. 00:30:25.740 [2024-11-15 11:53:51.086095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.740 [2024-11-15 11:53:51.086132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.740 qpair failed and we were unable to recover it. 
00:30:25.740 [2024-11-15 11:53:51.086495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.740 [2024-11-15 11:53:51.086538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.740 qpair failed and we were unable to recover it. 00:30:25.740 [2024-11-15 11:53:51.086937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.740 [2024-11-15 11:53:51.086974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.740 qpair failed and we were unable to recover it. 00:30:25.740 [2024-11-15 11:53:51.087336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.740 [2024-11-15 11:53:51.087372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.740 qpair failed and we were unable to recover it. 00:30:25.740 [2024-11-15 11:53:51.087788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.740 [2024-11-15 11:53:51.087827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.740 qpair failed and we were unable to recover it. 00:30:25.740 [2024-11-15 11:53:51.088189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.740 [2024-11-15 11:53:51.088225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.740 qpair failed and we were unable to recover it. 00:30:25.740 [2024-11-15 11:53:51.088455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.740 [2024-11-15 11:53:51.088490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.740 qpair failed and we were unable to recover it. 00:30:25.740 [2024-11-15 11:53:51.088888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.740 [2024-11-15 11:53:51.088925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.740 qpair failed and we were unable to recover it. 00:30:25.740 [2024-11-15 11:53:51.089161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.740 [2024-11-15 11:53:51.089198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.740 qpair failed and we were unable to recover it. 00:30:25.740 [2024-11-15 11:53:51.089585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.740 [2024-11-15 11:53:51.089623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.740 qpair failed and we were unable to recover it. 00:30:25.740 [2024-11-15 11:53:51.090000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.740 [2024-11-15 11:53:51.090036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.740 qpair failed and we were unable to recover it. 
00:30:25.740 [2024-11-15 11:53:51.090393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.740 [2024-11-15 11:53:51.090429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.740 qpair failed and we were unable to recover it. 00:30:25.740 [2024-11-15 11:53:51.090825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.740 [2024-11-15 11:53:51.090862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.740 qpair failed and we were unable to recover it. 00:30:25.740 [2024-11-15 11:53:51.090992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.740 [2024-11-15 11:53:51.091029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.740 qpair failed and we were unable to recover it. 00:30:25.740 [2024-11-15 11:53:51.091446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.740 [2024-11-15 11:53:51.091481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.740 qpair failed and we were unable to recover it. 00:30:25.740 [2024-11-15 11:53:51.091760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.740 [2024-11-15 11:53:51.091799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.740 qpair failed and we were unable to recover it. 00:30:25.740 [2024-11-15 11:53:51.092174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.740 [2024-11-15 11:53:51.092210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.740 qpair failed and we were unable to recover it. 00:30:25.740 [2024-11-15 11:53:51.092560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.740 [2024-11-15 11:53:51.092605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.740 qpair failed and we were unable to recover it. 00:30:25.740 [2024-11-15 11:53:51.092875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.740 [2024-11-15 11:53:51.092912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.740 qpair failed and we were unable to recover it. 00:30:25.741 [2024-11-15 11:53:51.093166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-11-15 11:53:51.093204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-11-15 11:53:51.093619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-11-15 11:53:51.093657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 
00:30:25.741 [2024-11-15 11:53:51.094053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-11-15 11:53:51.094088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-11-15 11:53:51.094474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-11-15 11:53:51.094511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-11-15 11:53:51.094934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-11-15 11:53:51.094972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-11-15 11:53:51.095369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-11-15 11:53:51.095406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-11-15 11:53:51.095814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-11-15 11:53:51.095852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-11-15 11:53:51.096096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-11-15 11:53:51.096131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-11-15 11:53:51.096554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-11-15 11:53:51.096599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-11-15 11:53:51.096866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-11-15 11:53:51.096902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-11-15 11:53:51.097330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-11-15 11:53:51.097366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-11-15 11:53:51.097767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-11-15 11:53:51.097804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 
00:30:25.741 [2024-11-15 11:53:51.098164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-11-15 11:53:51.098199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-11-15 11:53:51.098586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-11-15 11:53:51.098623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-11-15 11:53:51.099016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-11-15 11:53:51.099051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-11-15 11:53:51.099287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-11-15 11:53:51.099322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-11-15 11:53:51.099708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-11-15 11:53:51.099745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-11-15 11:53:51.100121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-11-15 11:53:51.100156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-11-15 11:53:51.100515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-11-15 11:53:51.100550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-11-15 11:53:51.100959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-11-15 11:53:51.100994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-11-15 11:53:51.101239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-11-15 11:53:51.101274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-11-15 11:53:51.101621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-11-15 11:53:51.101659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 
00:30:25.741 [2024-11-15 11:53:51.101897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-11-15 11:53:51.101938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-11-15 11:53:51.102295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-11-15 11:53:51.102331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-11-15 11:53:51.102713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-11-15 11:53:51.102751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-11-15 11:53:51.103129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-11-15 11:53:51.103163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-11-15 11:53:51.103527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-11-15 11:53:51.103572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-11-15 11:53:51.103990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-11-15 11:53:51.104025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-11-15 11:53:51.104301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-11-15 11:53:51.104336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-11-15 11:53:51.104608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-11-15 11:53:51.104643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-11-15 11:53:51.104913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-11-15 11:53:51.104952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-11-15 11:53:51.105311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-11-15 11:53:51.105347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 
00:30:25.741 [2024-11-15 11:53:51.105600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-11-15 11:53:51.105637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-11-15 11:53:51.106026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-11-15 11:53:51.106060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-11-15 11:53:51.106419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-11-15 11:53:51.106455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-11-15 11:53:51.106816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-11-15 11:53:51.106853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-11-15 11:53:51.107229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-11-15 11:53:51.107264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-11-15 11:53:51.107530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-11-15 11:53:51.107575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-11-15 11:53:51.108007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-11-15 11:53:51.108042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-11-15 11:53:51.108305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-11-15 11:53:51.108341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-11-15 11:53:51.108613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-11-15 11:53:51.108650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-11-15 11:53:51.109068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-11-15 11:53:51.109101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 
00:30:25.742 [2024-11-15 11:53:51.109327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-11-15 11:53:51.109362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-11-15 11:53:51.109724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-11-15 11:53:51.109761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-11-15 11:53:51.110121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-11-15 11:53:51.110156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-11-15 11:53:51.110546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-11-15 11:53:51.110603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-11-15 11:53:51.110842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-11-15 11:53:51.110878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-11-15 11:53:51.111154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-11-15 11:53:51.111191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-11-15 11:53:51.111583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-11-15 11:53:51.111620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-11-15 11:53:51.112029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-11-15 11:53:51.112066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-11-15 11:53:51.112431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-11-15 11:53:51.112466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-11-15 11:53:51.112827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-11-15 11:53:51.112864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 
00:30:25.742 [2024-11-15 11:53:51.113243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-11-15 11:53:51.113279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-11-15 11:53:51.113510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-11-15 11:53:51.113544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-11-15 11:53:51.113964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-11-15 11:53:51.114000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-11-15 11:53:51.114361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-11-15 11:53:51.114395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-11-15 11:53:51.114683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-11-15 11:53:51.114724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-11-15 11:53:51.115098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-11-15 11:53:51.115133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-11-15 11:53:51.115488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-11-15 11:53:51.115524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-11-15 11:53:51.115952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-11-15 11:53:51.115989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-11-15 11:53:51.116370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-11-15 11:53:51.116405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-11-15 11:53:51.116652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-11-15 11:53:51.116689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 
00:30:25.742 [2024-11-15 11:53:51.116949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-11-15 11:53:51.116991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-11-15 11:53:51.117418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-11-15 11:53:51.117453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-11-15 11:53:51.117860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-11-15 11:53:51.117896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-11-15 11:53:51.118290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-11-15 11:53:51.118325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-11-15 11:53:51.118585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-11-15 11:53:51.118622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-11-15 11:53:51.119014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-11-15 11:53:51.119049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-11-15 11:53:51.119409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-11-15 11:53:51.119444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-11-15 11:53:51.119674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-11-15 11:53:51.119710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-11-15 11:53:51.119961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-11-15 11:53:51.119996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-11-15 11:53:51.120428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-11-15 11:53:51.120462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 
00:30:25.743 [2024-11-15 11:53:51.120820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.743 [2024-11-15 11:53:51.120856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420
00:30:25.743 qpair failed and we were unable to recover it.
[... the identical connect()/qpair-failure triplet repeats for every reconnect attempt from 11:53:51.121091 through 11:53:51.138827; only the microsecond timestamps change ...]
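errno = 111 in the posix_sock_create messages above is Linux ECONNREFUSED: nothing is accepting TCP connections on 10.0.0.2:4420 while the target side of the disconnect test is down, so every host reconnect attempt fails the same way. A quick way to decode the value outside the harness (illustrative sketch; assumes python3 is available on the build host):

  # Map errno 111 to its symbolic name and message (Linux)
  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # prints: ECONNREFUSED - Connection refused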
[... failed reconnect attempts at 11:53:51.139109 through 11:53:51.140322, same triplet as above ...]
00:30:25.744 11:53:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 ))
[... failed reconnect attempt at 11:53:51.140714 ...]
00:30:25.744 11:53:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0
[... failed reconnect attempt at 11:53:51.141113 ...]
00:30:25.744 11:53:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
[... failed reconnect attempt at 11:53:51.141535 ...]
00:30:25.744 11:53:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:30:25.744 11:53:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... failed reconnect attempt at 11:53:51.141960 ...]
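The xtrace lines interleaved above show the harness leaving target startup timing (timing_exit start_nvmf_tgt) while the host-side reconnect loop keeps spinning. One way to watch for the listener coming back from a shell (illustrative only; -z/-w semantics vary slightly between netcat builds):

  # Poll the NVMe/TCP listen port until it accepts connections again
  until nc -z -w 1 10.0.0.2 4420; do sleep 0.5; done
  echo "port 4420 is accepting connections"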
[... the connect() failed, errno = 111 / qpair failed triplet repeats unchanged for every reconnect attempt from 11:53:51.142245 through 11:53:51.184208 ...]
00:30:25.747 11:53:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
[... failed reconnect attempt at 11:53:51.184661 ...]
00:30:25.747 11:53:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
[... failed reconnect attempt at 11:53:51.185064 ...]
00:30:25.747 11:53:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:25.747 11:53:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... failed reconnect attempts at 11:53:51.185377 through 11:53:51.187360 ...]
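target_disconnect.sh@19 above creates the backing bdev for the test over SPDK's JSON-RPC: bdev_malloc_create takes the total size in MB (64), the block size in bytes (512), and -b for the bdev name (Malloc0). The same call can be issued directly against a running SPDK target (sketch; path assumes an SPDK source checkout):

  # Create a 64 MB RAM-backed bdev with 512-byte blocks named Malloc0
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  # on success the RPC returns the bdev name, Malloc0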
00:30:25.748 [2024-11-15 11:53:51.187828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-11-15 11:53:51.187864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-11-15 11:53:51.188225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-11-15 11:53:51.188261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-11-15 11:53:51.188618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-11-15 11:53:51.188654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-11-15 11:53:51.189040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-11-15 11:53:51.189075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-11-15 11:53:51.189309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-11-15 11:53:51.189344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-11-15 11:53:51.189604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-11-15 11:53:51.189641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-11-15 11:53:51.190070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-11-15 11:53:51.190105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-11-15 11:53:51.190491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-11-15 11:53:51.190525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-11-15 11:53:51.190915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-11-15 11:53:51.190950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-11-15 11:53:51.191308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-11-15 11:53:51.191343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 
00:30:25.748 [2024-11-15 11:53:51.191709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-11-15 11:53:51.191746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-11-15 11:53:51.191992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-11-15 11:53:51.192026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-11-15 11:53:51.192323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-11-15 11:53:51.192359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-11-15 11:53:51.192774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-11-15 11:53:51.192812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-11-15 11:53:51.193185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-11-15 11:53:51.193220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-11-15 11:53:51.193587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-11-15 11:53:51.193624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-11-15 11:53:51.193927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-11-15 11:53:51.193966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-11-15 11:53:51.194267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-11-15 11:53:51.194302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-11-15 11:53:51.194671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-11-15 11:53:51.194709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-11-15 11:53:51.194990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-11-15 11:53:51.195025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 
00:30:25.748 [2024-11-15 11:53:51.195334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-11-15 11:53:51.195369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-11-15 11:53:51.195745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-11-15 11:53:51.195781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-11-15 11:53:51.196048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-11-15 11:53:51.196084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-11-15 11:53:51.196488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-11-15 11:53:51.196523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-11-15 11:53:51.196984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-11-15 11:53:51.197021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-11-15 11:53:51.197424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-11-15 11:53:51.197459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-11-15 11:53:51.197879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-11-15 11:53:51.197916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-11-15 11:53:51.198293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-11-15 11:53:51.198328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-11-15 11:53:51.198692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-11-15 11:53:51.198728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-11-15 11:53:51.199151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-11-15 11:53:51.199186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 
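Every attempt in this stretch fails identically: errno = 111 is ECONNREFUSED on Linux, meaning nothing is accepting TCP connections at 10.0.0.2:4420 while the host keeps redialing, which is exactly the condition a target-disconnect test is meant to provoke. When triaging a log like this by hand, a quick shell check (not part of the harness, just a diagnostic sketch) decodes the errno and probes the listener:

    # decode errno 111 (Linux): ECONNREFUSED, "Connection refused"
    python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
    # probe the NVMe-oF listener with a 1 second timeout
    nc -z -w 1 10.0.0.2 4420 && echo listener up || echo listener down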
00:30:25.748 [2024-11-15 11:53:51.199545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-11-15 11:53:51.199587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-11-15 11:53:51.199948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-11-15 11:53:51.199983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-11-15 11:53:51.200392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-11-15 11:53:51.200427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-11-15 11:53:51.200823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-11-15 11:53:51.200858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-11-15 11:53:51.201236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-11-15 11:53:51.201271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-11-15 11:53:51.201630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-11-15 11:53:51.201666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-11-15 11:53:51.201949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-11-15 11:53:51.201984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.749 [2024-11-15 11:53:51.202407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.749 [2024-11-15 11:53:51.202442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.749 qpair failed and we were unable to recover it. 00:30:25.749 [2024-11-15 11:53:51.202721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.749 [2024-11-15 11:53:51.202765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.749 qpair failed and we were unable to recover it. 00:30:25.749 [2024-11-15 11:53:51.203162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.749 [2024-11-15 11:53:51.203197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.749 qpair failed and we were unable to recover it. 
00:30:25.749 [2024-11-15 11:53:51.203559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.749 [2024-11-15 11:53:51.203605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.749 qpair failed and we were unable to recover it. 00:30:25.749 [2024-11-15 11:53:51.204030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.749 [2024-11-15 11:53:51.204065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.749 qpair failed and we were unable to recover it. 00:30:25.749 [2024-11-15 11:53:51.204340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.749 [2024-11-15 11:53:51.204374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.749 qpair failed and we were unable to recover it. 00:30:25.749 [2024-11-15 11:53:51.204671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.749 [2024-11-15 11:53:51.204707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.749 qpair failed and we were unable to recover it. 00:30:25.749 [2024-11-15 11:53:51.205075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.749 [2024-11-15 11:53:51.205110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.749 qpair failed and we were unable to recover it. 00:30:25.749 [2024-11-15 11:53:51.205285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.749 [2024-11-15 11:53:51.205320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.749 qpair failed and we were unable to recover it. 00:30:25.749 [2024-11-15 11:53:51.205555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.749 [2024-11-15 11:53:51.205602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.749 qpair failed and we were unable to recover it. 00:30:25.749 [2024-11-15 11:53:51.205998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.749 [2024-11-15 11:53:51.206033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.749 qpair failed and we were unable to recover it. 00:30:25.749 [2024-11-15 11:53:51.206310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.749 [2024-11-15 11:53:51.206348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.749 qpair failed and we were unable to recover it. 00:30:25.749 [2024-11-15 11:53:51.206651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.749 [2024-11-15 11:53:51.206688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.749 qpair failed and we were unable to recover it. 
00:30:25.749 [2024-11-15 11:53:51.207043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.749 [2024-11-15 11:53:51.207078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.749 qpair failed and we were unable to recover it. 00:30:25.749 [2024-11-15 11:53:51.207369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.749 [2024-11-15 11:53:51.207407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.749 qpair failed and we were unable to recover it. 00:30:25.749 [2024-11-15 11:53:51.207690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.749 [2024-11-15 11:53:51.207730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.749 qpair failed and we were unable to recover it. 00:30:25.749 [2024-11-15 11:53:51.208100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.749 [2024-11-15 11:53:51.208135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.749 qpair failed and we were unable to recover it. 00:30:25.749 [2024-11-15 11:53:51.208383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.749 [2024-11-15 11:53:51.208417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.749 qpair failed and we were unable to recover it. 00:30:25.749 [2024-11-15 11:53:51.208770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.749 [2024-11-15 11:53:51.208806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.749 qpair failed and we were unable to recover it. 00:30:25.749 [2024-11-15 11:53:51.209164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.749 [2024-11-15 11:53:51.209198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.749 qpair failed and we were unable to recover it. 00:30:25.749 [2024-11-15 11:53:51.209585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.749 [2024-11-15 11:53:51.209626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.749 qpair failed and we were unable to recover it. 00:30:25.749 [2024-11-15 11:53:51.209876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.749 [2024-11-15 11:53:51.209912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.749 qpair failed and we were unable to recover it. 00:30:25.749 [2024-11-15 11:53:51.210290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.749 [2024-11-15 11:53:51.210324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.749 qpair failed and we were unable to recover it. 
00:30:25.749 [2024-11-15 11:53:51.210685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.749 [2024-11-15 11:53:51.210722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.749 qpair failed and we were unable to recover it. 00:30:25.749 [2024-11-15 11:53:51.211138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.749 [2024-11-15 11:53:51.211173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.749 qpair failed and we were unable to recover it. 00:30:25.749 [2024-11-15 11:53:51.211576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.749 [2024-11-15 11:53:51.211612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:25.749 qpair failed and we were unable to recover it. 00:30:26.014 [2024-11-15 11:53:51.212048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.014 [2024-11-15 11:53:51.212085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.014 qpair failed and we were unable to recover it. 00:30:26.014 [2024-11-15 11:53:51.212315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.014 [2024-11-15 11:53:51.212349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.014 qpair failed and we were unable to recover it. 00:30:26.014 [2024-11-15 11:53:51.212765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.014 [2024-11-15 11:53:51.212801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.014 qpair failed and we were unable to recover it. 00:30:26.014 [2024-11-15 11:53:51.213166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.014 [2024-11-15 11:53:51.213202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.014 qpair failed and we were unable to recover it. 00:30:26.014 [2024-11-15 11:53:51.213576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.014 [2024-11-15 11:53:51.213612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.014 qpair failed and we were unable to recover it. 00:30:26.014 [2024-11-15 11:53:51.213980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.014 [2024-11-15 11:53:51.214015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.014 qpair failed and we were unable to recover it. 00:30:26.014 [2024-11-15 11:53:51.214377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.014 [2024-11-15 11:53:51.214413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.014 qpair failed and we were unable to recover it. 
00:30:26.015 [2024-11-15 11:53:51.214817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.015 [2024-11-15 11:53:51.214855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.015 qpair failed and we were unable to recover it. 00:30:26.015 [2024-11-15 11:53:51.215211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.015 [2024-11-15 11:53:51.215246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.015 qpair failed and we were unable to recover it. 00:30:26.015 [2024-11-15 11:53:51.215610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.015 [2024-11-15 11:53:51.215647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.015 qpair failed and we were unable to recover it. 00:30:26.015 [2024-11-15 11:53:51.215914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.015 [2024-11-15 11:53:51.215949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.015 qpair failed and we were unable to recover it. 00:30:26.015 [2024-11-15 11:53:51.216204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.015 [2024-11-15 11:53:51.216239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.015 qpair failed and we were unable to recover it. 00:30:26.015 [2024-11-15 11:53:51.216585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.015 [2024-11-15 11:53:51.216622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.015 qpair failed and we were unable to recover it. 00:30:26.015 [2024-11-15 11:53:51.217043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.015 [2024-11-15 11:53:51.217077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.015 qpair failed and we were unable to recover it. 00:30:26.015 [2024-11-15 11:53:51.217280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.015 [2024-11-15 11:53:51.217315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.015 qpair failed and we were unable to recover it. 00:30:26.015 [2024-11-15 11:53:51.217729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.015 [2024-11-15 11:53:51.217772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.015 qpair failed and we were unable to recover it. 00:30:26.015 [2024-11-15 11:53:51.218135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.015 [2024-11-15 11:53:51.218170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.015 qpair failed and we were unable to recover it. 
00:30:26.015 [2024-11-15 11:53:51.218408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.015 [2024-11-15 11:53:51.218443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.015 qpair failed and we were unable to recover it. 00:30:26.015 [2024-11-15 11:53:51.218842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.015 [2024-11-15 11:53:51.218880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.015 qpair failed and we were unable to recover it. 00:30:26.015 [2024-11-15 11:53:51.219141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.015 [2024-11-15 11:53:51.219176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.015 qpair failed and we were unable to recover it. 00:30:26.015 [2024-11-15 11:53:51.219602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.015 [2024-11-15 11:53:51.219639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.015 qpair failed and we were unable to recover it. 00:30:26.015 [2024-11-15 11:53:51.220037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.015 [2024-11-15 11:53:51.220072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.015 qpair failed and we were unable to recover it. 00:30:26.015 [2024-11-15 11:53:51.220415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.015 [2024-11-15 11:53:51.220449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.015 qpair failed and we were unable to recover it. 00:30:26.015 [2024-11-15 11:53:51.220819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.015 [2024-11-15 11:53:51.220856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.015 qpair failed and we were unable to recover it. 00:30:26.015 [2024-11-15 11:53:51.221109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.015 [2024-11-15 11:53:51.221148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.015 qpair failed and we were unable to recover it. 00:30:26.015 [2024-11-15 11:53:51.221452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.015 [2024-11-15 11:53:51.221487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.015 qpair failed and we were unable to recover it. 00:30:26.015 [2024-11-15 11:53:51.221930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.015 [2024-11-15 11:53:51.221968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.015 qpair failed and we were unable to recover it. 
00:30:26.015 [2024-11-15 11:53:51.222329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.015 [2024-11-15 11:53:51.222364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.015 qpair failed and we were unable to recover it. 00:30:26.015 [2024-11-15 11:53:51.222766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.015 [2024-11-15 11:53:51.222803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.015 qpair failed and we were unable to recover it. 00:30:26.015 [2024-11-15 11:53:51.223106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.015 [2024-11-15 11:53:51.223142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.015 qpair failed and we were unable to recover it. 00:30:26.015 [2024-11-15 11:53:51.223504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.015 [2024-11-15 11:53:51.223539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.015 qpair failed and we were unable to recover it. 00:30:26.015 [2024-11-15 11:53:51.223841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.015 [2024-11-15 11:53:51.223880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.015 qpair failed and we were unable to recover it. 00:30:26.015 [2024-11-15 11:53:51.224134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.015 [2024-11-15 11:53:51.224169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.015 qpair failed and we were unable to recover it. 00:30:26.015 [2024-11-15 11:53:51.224595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.015 [2024-11-15 11:53:51.224632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.015 qpair failed and we were unable to recover it. 00:30:26.015 [2024-11-15 11:53:51.224954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.015 [2024-11-15 11:53:51.224990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.015 qpair failed and we were unable to recover it. 00:30:26.015 [2024-11-15 11:53:51.225263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.015 [2024-11-15 11:53:51.225298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.015 qpair failed and we were unable to recover it. 00:30:26.015 [2024-11-15 11:53:51.225589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.015 [2024-11-15 11:53:51.225626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.015 qpair failed and we were unable to recover it. 
00:30:26.015 [2024-11-15 11:53:51.225967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.015 [2024-11-15 11:53:51.226002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.015 qpair failed and we were unable to recover it. 00:30:26.015 [2024-11-15 11:53:51.226369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.015 [2024-11-15 11:53:51.226405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.015 qpair failed and we were unable to recover it. 00:30:26.015 [2024-11-15 11:53:51.226792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.015 [2024-11-15 11:53:51.226829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.015 qpair failed and we were unable to recover it. 00:30:26.015 [2024-11-15 11:53:51.227137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.015 [2024-11-15 11:53:51.227173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.015 qpair failed and we were unable to recover it. 00:30:26.015 [2024-11-15 11:53:51.227532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.016 [2024-11-15 11:53:51.227579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.016 qpair failed and we were unable to recover it. 00:30:26.016 [2024-11-15 11:53:51.227850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.016 [2024-11-15 11:53:51.227888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.016 qpair failed and we were unable to recover it. 00:30:26.016 [2024-11-15 11:53:51.228275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.016 [2024-11-15 11:53:51.228310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.016 qpair failed and we were unable to recover it. 00:30:26.016 [2024-11-15 11:53:51.228671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.016 [2024-11-15 11:53:51.228708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.016 qpair failed and we were unable to recover it. 00:30:26.016 [2024-11-15 11:53:51.229112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.016 [2024-11-15 11:53:51.229147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.016 qpair failed and we were unable to recover it. 00:30:26.016 [2024-11-15 11:53:51.229509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.016 [2024-11-15 11:53:51.229544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.016 qpair failed and we were unable to recover it. 
00:30:26.016 [2024-11-15 11:53:51.230000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.016 [2024-11-15 11:53:51.230037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420
00:30:26.016 qpair failed and we were unable to recover it.
00:30:26.016 [2024-11-15 11:53:51.230419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.016 [2024-11-15 11:53:51.230454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420
00:30:26.016 qpair failed and we were unable to recover it.
00:30:26.016 [2024-11-15 11:53:51.230821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.016 [2024-11-15 11:53:51.230858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420
00:30:26.016 qpair failed and we were unable to recover it.
00:30:26.016 [2024-11-15 11:53:51.231220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.016 [2024-11-15 11:53:51.231256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420
00:30:26.016 qpair failed and we were unable to recover it.
00:30:26.016 [2024-11-15 11:53:51.231609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.016 [2024-11-15 11:53:51.231646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420
00:30:26.016 qpair failed and we were unable to recover it.
00:30:26.016 [2024-11-15 11:53:51.232009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.016 Malloc0
00:30:26.016 [2024-11-15 11:53:51.232044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420
00:30:26.016 qpair failed and we were unable to recover it.
00:30:26.016 [2024-11-15 11:53:51.232407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.016 [2024-11-15 11:53:51.232442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420
00:30:26.016 qpair failed and we were unable to recover it.
00:30:26.016 [2024-11-15 11:53:51.232821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.016 [2024-11-15 11:53:51.232859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420
00:30:26.016 11:53:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:26.016 qpair failed and we were unable to recover it.
00:30:26.016 [2024-11-15 11:53:51.233245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.016 11:53:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:30:26.016 [2024-11-15 11:53:51.233280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420
00:30:26.016 qpair failed and we were unable to recover it.
00:30:26.016 [2024-11-15 11:53:51.233576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.016 11:53:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:26.016 [2024-11-15 11:53:51.233612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420
00:30:26.016 qpair failed and we were unable to recover it.
00:30:26.016 11:53:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:26.016 [2024-11-15 11:53:51.233889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.016 [2024-11-15 11:53:51.233924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420
00:30:26.016 qpair failed and we were unable to recover it.
00:30:26.016 [2024-11-15 11:53:51.234183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.016 [2024-11-15 11:53:51.234218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420
00:30:26.016 qpair failed and we were unable to recover it.
00:30:26.016 [2024-11-15 11:53:51.234618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.016 [2024-11-15 11:53:51.234655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420
00:30:26.016 qpair failed and we were unable to recover it.
00:30:26.016 [2024-11-15 11:53:51.235046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.016 [2024-11-15 11:53:51.235081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420
00:30:26.016 qpair failed and we were unable to recover it.
00:30:26.016 [2024-11-15 11:53:51.235444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.016 [2024-11-15 11:53:51.235479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420
00:30:26.016 qpair failed and we were unable to recover it.
00:30:26.016 [2024-11-15 11:53:51.235735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.016 [2024-11-15 11:53:51.235772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420
00:30:26.016 qpair failed and we were unable to recover it.
00:30:26.016 [2024-11-15 11:53:51.236119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.016 [2024-11-15 11:53:51.236154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420
00:30:26.016 qpair failed and we were unable to recover it.
00:30:26.016 [2024-11-15 11:53:51.236518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.016 [2024-11-15 11:53:51.236553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420
00:30:26.016 qpair failed and we were unable to recover it.
00:30:26.016 [2024-11-15 11:53:51.236925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.016 [2024-11-15 11:53:51.236960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.016 qpair failed and we were unable to recover it. 00:30:26.016 [2024-11-15 11:53:51.237302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.016 [2024-11-15 11:53:51.237337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.016 qpair failed and we were unable to recover it. 00:30:26.016 [2024-11-15 11:53:51.237627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.016 [2024-11-15 11:53:51.237664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.016 qpair failed and we were unable to recover it. 00:30:26.016 [2024-11-15 11:53:51.237802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.016 [2024-11-15 11:53:51.237836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.016 qpair failed and we were unable to recover it. 00:30:26.016 [2024-11-15 11:53:51.238247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.016 [2024-11-15 11:53:51.238282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.016 qpair failed and we were unable to recover it. 00:30:26.016 [2024-11-15 11:53:51.238590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.016 [2024-11-15 11:53:51.238627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.016 qpair failed and we were unable to recover it. 00:30:26.016 [2024-11-15 11:53:51.239052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.016 [2024-11-15 11:53:51.239086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.016 qpair failed and we were unable to recover it. 00:30:26.016 [2024-11-15 11:53:51.239401] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:26.016 [2024-11-15 11:53:51.239470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.016 [2024-11-15 11:53:51.239514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.016 qpair failed and we were unable to recover it. 00:30:26.016 [2024-11-15 11:53:51.239978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.016 [2024-11-15 11:53:51.240020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.016 qpair failed and we were unable to recover it. 
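Buried in the block above is the target-side acknowledgement "*** TCP Transport Init ***" (tcp.c:738), the response to the rpc_cmd nvmf_create_transport -t tcp -o traced just before it. A rough standalone equivalent follows; reading -o as rpc.py's TCP-only --c2h-success switch is an assumption from the rpc.py option table, not something this log confirms:

    # instantiate the TCP transport in the running target
    # -o (--c2h-success in rpc.py) disables the C2H success optimization;
    # that mapping is an assumption, not confirmed by this log
    ./scripts/rpc.py nvmf_create_transport -t tcp -o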
00:30:26.016 [2024-11-15 11:53:51.240402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.016 [2024-11-15 11:53:51.240438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.016 qpair failed and we were unable to recover it. 00:30:26.016 [2024-11-15 11:53:51.240805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.016 [2024-11-15 11:53:51.240843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.016 qpair failed and we were unable to recover it. 00:30:26.016 [2024-11-15 11:53:51.241204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.016 [2024-11-15 11:53:51.241239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.016 qpair failed and we were unable to recover it. 00:30:26.017 [2024-11-15 11:53:51.241488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.017 [2024-11-15 11:53:51.241523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.017 qpair failed and we were unable to recover it. 00:30:26.017 [2024-11-15 11:53:51.241827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.017 [2024-11-15 11:53:51.241863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.017 qpair failed and we were unable to recover it. 00:30:26.017 [2024-11-15 11:53:51.242241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.017 [2024-11-15 11:53:51.242275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.017 qpair failed and we were unable to recover it. 00:30:26.017 [2024-11-15 11:53:51.242680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.017 [2024-11-15 11:53:51.242718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.017 qpair failed and we were unable to recover it. 00:30:26.017 [2024-11-15 11:53:51.243155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.017 [2024-11-15 11:53:51.243191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.017 qpair failed and we were unable to recover it. 00:30:26.017 [2024-11-15 11:53:51.243432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.017 [2024-11-15 11:53:51.243467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.017 qpair failed and we were unable to recover it. 00:30:26.017 [2024-11-15 11:53:51.243900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.017 [2024-11-15 11:53:51.243936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.017 qpair failed and we were unable to recover it. 
00:30:26.017 [2024-11-15 11:53:51.244303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.017 [2024-11-15 11:53:51.244339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.017 qpair failed and we were unable to recover it. 00:30:26.017 [2024-11-15 11:53:51.244599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.017 [2024-11-15 11:53:51.244638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.017 qpair failed and we were unable to recover it. 00:30:26.017 [2024-11-15 11:53:51.245016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.017 [2024-11-15 11:53:51.245052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.017 qpair failed and we were unable to recover it. 00:30:26.017 [2024-11-15 11:53:51.245440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.017 [2024-11-15 11:53:51.245475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.017 qpair failed and we were unable to recover it. 00:30:26.017 [2024-11-15 11:53:51.245844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.017 [2024-11-15 11:53:51.245881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.017 qpair failed and we were unable to recover it. 00:30:26.017 [2024-11-15 11:53:51.246134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.017 [2024-11-15 11:53:51.246169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.017 qpair failed and we were unable to recover it. 00:30:26.017 [2024-11-15 11:53:51.246582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.017 [2024-11-15 11:53:51.246619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.017 qpair failed and we were unable to recover it. 00:30:26.017 [2024-11-15 11:53:51.246905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.017 [2024-11-15 11:53:51.246940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.017 qpair failed and we were unable to recover it. 00:30:26.017 [2024-11-15 11:53:51.247184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.017 [2024-11-15 11:53:51.247219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.017 qpair failed and we were unable to recover it. 00:30:26.017 [2024-11-15 11:53:51.247610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.017 [2024-11-15 11:53:51.247654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.017 qpair failed and we were unable to recover it. 
00:30:26.017 [2024-11-15 11:53:51.247912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.017 [2024-11-15 11:53:51.247947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420
00:30:26.017 qpair failed and we were unable to recover it.
00:30:26.017 [2024-11-15 11:53:51.248220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.017 [2024-11-15 11:53:51.248256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420
00:30:26.017 qpair failed and we were unable to recover it.
00:30:26.017 11:53:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:26.017 [2024-11-15 11:53:51.248643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.017 [2024-11-15 11:53:51.248681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420
00:30:26.017 qpair failed and we were unable to recover it.
00:30:26.017 11:53:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:26.017 [2024-11-15 11:53:51.249062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.017 [2024-11-15 11:53:51.249098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420
00:30:26.017 qpair failed and we were unable to recover it.
00:30:26.017 11:53:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:26.017 [2024-11-15 11:53:51.249360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.017 11:53:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:26.017 [2024-11-15 11:53:51.249398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420
00:30:26.017 qpair failed and we were unable to recover it.
00:30:26.017 [2024-11-15 11:53:51.249823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.017 [2024-11-15 11:53:51.249860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420
00:30:26.017 qpair failed and we were unable to recover it.
00:30:26.017 [2024-11-15 11:53:51.250233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.017 [2024-11-15 11:53:51.250268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420
00:30:26.017 qpair failed and we were unable to recover it.
00:30:26.017 [2024-11-15 11:53:51.250522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.017 [2024-11-15 11:53:51.250557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420
00:30:26.017 qpair failed and we were unable to recover it.
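The trace at host/target_disconnect.sh@22 in the block above creates the NVMe-oF subsystem the host will reconnect to. A rough standalone equivalent, per rpc.py's documented flags (-a allows any host NQN to connect, -s sets the reported serial number):

    # create subsystem cnode1: -a = allow any host, -s = serial number
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001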
00:30:26.017 [2024-11-15 11:53:51.250965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.017 [2024-11-15 11:53:51.251001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.017 qpair failed and we were unable to recover it. 00:30:26.017 [2024-11-15 11:53:51.251363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.017 [2024-11-15 11:53:51.251398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.017 qpair failed and we were unable to recover it. 00:30:26.017 [2024-11-15 11:53:51.251788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.017 [2024-11-15 11:53:51.251824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.017 qpair failed and we were unable to recover it. 00:30:26.017 [2024-11-15 11:53:51.252219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.017 [2024-11-15 11:53:51.252255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.017 qpair failed and we were unable to recover it. 00:30:26.017 [2024-11-15 11:53:51.252646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.017 [2024-11-15 11:53:51.252704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.017 qpair failed and we were unable to recover it. 00:30:26.017 [2024-11-15 11:53:51.253075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.017 [2024-11-15 11:53:51.253114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.017 qpair failed and we were unable to recover it. 00:30:26.017 [2024-11-15 11:53:51.253481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.017 [2024-11-15 11:53:51.253516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.017 qpair failed and we were unable to recover it. 00:30:26.017 [2024-11-15 11:53:51.253967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.017 [2024-11-15 11:53:51.254004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.017 qpair failed and we were unable to recover it. 00:30:26.017 [2024-11-15 11:53:51.254280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.017 [2024-11-15 11:53:51.254314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.017 qpair failed and we were unable to recover it. 00:30:26.017 [2024-11-15 11:53:51.254738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.017 [2024-11-15 11:53:51.254774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.017 qpair failed and we were unable to recover it. 
00:30:26.017 [2024-11-15 11:53:51.255138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.017 [2024-11-15 11:53:51.255174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.017 qpair failed and we were unable to recover it. 00:30:26.018 [2024-11-15 11:53:51.255539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.018 [2024-11-15 11:53:51.255603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.018 qpair failed and we were unable to recover it. 00:30:26.018 [2024-11-15 11:53:51.255901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.018 [2024-11-15 11:53:51.255939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.018 qpair failed and we were unable to recover it. 00:30:26.018 [2024-11-15 11:53:51.256377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.018 [2024-11-15 11:53:51.256412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.018 qpair failed and we were unable to recover it. 00:30:26.018 [2024-11-15 11:53:51.256738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.018 [2024-11-15 11:53:51.256775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.018 qpair failed and we were unable to recover it. 00:30:26.018 [2024-11-15 11:53:51.257040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.018 [2024-11-15 11:53:51.257075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.018 qpair failed and we were unable to recover it. 00:30:26.018 [2024-11-15 11:53:51.257472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.018 [2024-11-15 11:53:51.257507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.018 qpair failed and we were unable to recover it. 00:30:26.018 [2024-11-15 11:53:51.257889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.018 [2024-11-15 11:53:51.257927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.018 qpair failed and we were unable to recover it. 00:30:26.018 [2024-11-15 11:53:51.258318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.018 [2024-11-15 11:53:51.258353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.018 qpair failed and we were unable to recover it. 00:30:26.018 [2024-11-15 11:53:51.258745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.018 [2024-11-15 11:53:51.258783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.018 qpair failed and we were unable to recover it. 
00:30:26.018 [2024-11-15 11:53:51.259144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.018 [2024-11-15 11:53:51.259179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.018 qpair failed and we were unable to recover it. 00:30:26.018 [2024-11-15 11:53:51.259545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.018 [2024-11-15 11:53:51.259595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.018 qpair failed and we were unable to recover it. 00:30:26.018 [2024-11-15 11:53:51.259987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.018 [2024-11-15 11:53:51.260022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.018 qpair failed and we were unable to recover it. 00:30:26.018 11:53:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.018 [2024-11-15 11:53:51.260424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.018 [2024-11-15 11:53:51.260459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.018 qpair failed and we were unable to recover it. 00:30:26.018 11:53:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:26.018 [2024-11-15 11:53:51.260876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.018 [2024-11-15 11:53:51.260914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.018 qpair failed and we were unable to recover it. 00:30:26.018 11:53:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.018 [2024-11-15 11:53:51.261324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.018 11:53:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:26.018 [2024-11-15 11:53:51.261360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.018 qpair failed and we were unable to recover it. 00:30:26.018 [2024-11-15 11:53:51.261731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.018 [2024-11-15 11:53:51.261766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.018 qpair failed and we were unable to recover it. 00:30:26.018 [2024-11-15 11:53:51.262157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.018 [2024-11-15 11:53:51.262200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.018 qpair failed and we were unable to recover it. 
00:30:26.018 [2024-11-15 11:53:51.262577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.018 [2024-11-15 11:53:51.262618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.018 qpair failed and we were unable to recover it. 00:30:26.018 [2024-11-15 11:53:51.263020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.018 [2024-11-15 11:53:51.263055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.018 qpair failed and we were unable to recover it. 00:30:26.018 [2024-11-15 11:53:51.263294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.018 [2024-11-15 11:53:51.263329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.018 qpair failed and we were unable to recover it. 00:30:26.018 [2024-11-15 11:53:51.263715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.018 [2024-11-15 11:53:51.263752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.018 qpair failed and we were unable to recover it. 00:30:26.018 [2024-11-15 11:53:51.264117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.018 [2024-11-15 11:53:51.264152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.018 qpair failed and we were unable to recover it. 00:30:26.018 [2024-11-15 11:53:51.264420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.018 [2024-11-15 11:53:51.264454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.018 qpair failed and we were unable to recover it. 00:30:26.018 [2024-11-15 11:53:51.264873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.018 [2024-11-15 11:53:51.264910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.018 qpair failed and we were unable to recover it. 00:30:26.018 [2024-11-15 11:53:51.265306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.018 [2024-11-15 11:53:51.265341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.018 qpair failed and we were unable to recover it. 00:30:26.018 [2024-11-15 11:53:51.265742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.018 [2024-11-15 11:53:51.265777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.018 qpair failed and we were unable to recover it. 00:30:26.018 [2024-11-15 11:53:51.266199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.018 [2024-11-15 11:53:51.266234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.018 qpair failed and we were unable to recover it. 
00:30:26.018 [2024-11-15 11:53:51.266599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.018 [2024-11-15 11:53:51.266636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.018 qpair failed and we were unable to recover it. 00:30:26.018 [2024-11-15 11:53:51.266925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.018 [2024-11-15 11:53:51.266963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.018 qpair failed and we were unable to recover it. 00:30:26.018 [2024-11-15 11:53:51.267328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.018 [2024-11-15 11:53:51.267364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.018 qpair failed and we were unable to recover it. 00:30:26.018 [2024-11-15 11:53:51.267752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.018 [2024-11-15 11:53:51.267789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.018 qpair failed and we were unable to recover it. 00:30:26.018 [2024-11-15 11:53:51.268096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.018 [2024-11-15 11:53:51.268132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.018 qpair failed and we were unable to recover it. 00:30:26.018 [2024-11-15 11:53:51.268491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.018 [2024-11-15 11:53:51.268526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.018 qpair failed and we were unable to recover it. 00:30:26.018 [2024-11-15 11:53:51.268951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.018 [2024-11-15 11:53:51.268987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.018 qpair failed and we were unable to recover it. 00:30:26.018 [2024-11-15 11:53:51.269348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.018 [2024-11-15 11:53:51.269382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.018 qpair failed and we were unable to recover it. 00:30:26.018 [2024-11-15 11:53:51.269785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.018 [2024-11-15 11:53:51.269823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.018 qpair failed and we were unable to recover it. 00:30:26.018 [2024-11-15 11:53:51.270098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.019 [2024-11-15 11:53:51.270134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.019 qpair failed and we were unable to recover it. 
00:30:26.019 [2024-11-15 11:53:51.270433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.019 [2024-11-15 11:53:51.270468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.019 qpair failed and we were unable to recover it. 00:30:26.019 [2024-11-15 11:53:51.270600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.019 [2024-11-15 11:53:51.270637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.019 qpair failed and we were unable to recover it. 00:30:26.019 [2024-11-15 11:53:51.270888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.019 [2024-11-15 11:53:51.270923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.019 qpair failed and we were unable to recover it. 00:30:26.019 [2024-11-15 11:53:51.271328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.019 [2024-11-15 11:53:51.271363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.019 qpair failed and we were unable to recover it. 00:30:26.019 [2024-11-15 11:53:51.271638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.019 [2024-11-15 11:53:51.271674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.019 qpair failed and we were unable to recover it. 00:30:26.019 [2024-11-15 11:53:51.272050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.019 [2024-11-15 11:53:51.272085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.019 qpair failed and we were unable to recover it. 00:30:26.019 11:53:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:26.019 [2024-11-15 11:53:51.272478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.019 [2024-11-15 11:53:51.272514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.019 qpair failed and we were unable to recover it. 00:30:26.019 11:53:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:26.019 [2024-11-15 11:53:51.272989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.019 [2024-11-15 11:53:51.273026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.019 qpair failed and we were unable to recover it. 
00:30:26.019 11:53:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:26.019 [2024-11-15 11:53:51.273384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.019 11:53:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:26.019 [2024-11-15 11:53:51.273420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.019 qpair failed and we were unable to recover it. 00:30:26.019 [2024-11-15 11:53:51.273702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.019 [2024-11-15 11:53:51.273742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.019 qpair failed and we were unable to recover it. 00:30:26.019 [2024-11-15 11:53:51.274128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.019 [2024-11-15 11:53:51.274163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.019 qpair failed and we were unable to recover it. 00:30:26.019 [2024-11-15 11:53:51.274525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.019 [2024-11-15 11:53:51.274560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.019 qpair failed and we were unable to recover it. 00:30:26.019 [2024-11-15 11:53:51.274964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.019 [2024-11-15 11:53:51.274999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.019 qpair failed and we were unable to recover it. 00:30:26.019 [2024-11-15 11:53:51.275361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.019 [2024-11-15 11:53:51.275396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.019 qpair failed and we were unable to recover it. 00:30:26.019 [2024-11-15 11:53:51.275817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.019 [2024-11-15 11:53:51.275854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.019 qpair failed and we were unable to recover it. 00:30:26.019 [2024-11-15 11:53:51.276210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.019 [2024-11-15 11:53:51.276245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.019 qpair failed and we were unable to recover it. 00:30:26.019 [2024-11-15 11:53:51.276607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.019 [2024-11-15 11:53:51.276643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.019 qpair failed and we were unable to recover it. 
00:30:26.019 [2024-11-15 11:53:51.276933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.019 [2024-11-15 11:53:51.276967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.019 qpair failed and we were unable to recover it. 00:30:26.019 [2024-11-15 11:53:51.277337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.019 [2024-11-15 11:53:51.277373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.019 qpair failed and we were unable to recover it. 00:30:26.019 [2024-11-15 11:53:51.277768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.019 [2024-11-15 11:53:51.277805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.019 qpair failed and we were unable to recover it. 00:30:26.019 [2024-11-15 11:53:51.278199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.019 [2024-11-15 11:53:51.278234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.019 qpair failed and we were unable to recover it. 00:30:26.019 [2024-11-15 11:53:51.278626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.019 [2024-11-15 11:53:51.278661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.019 qpair failed and we were unable to recover it. 00:30:26.019 [2024-11-15 11:53:51.279056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.019 [2024-11-15 11:53:51.279091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.019 qpair failed and we were unable to recover it. 00:30:26.019 [2024-11-15 11:53:51.279440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.019 [2024-11-15 11:53:51.279475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa108000b90 with addr=10.0.0.2, port=4420 00:30:26.019 qpair failed and we were unable to recover it. 
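errno 111 is ECONNREFUSED: every connect() is refused because nothing is listening on 10.0.0.2:4420 yet — the host-side poller starts retrying before the script above has added the listener, so this burst is expected and stops at the "Target Listening" notice below. For reference, a minimal sketch of the same target-side bring-up, assuming a running nvmf_tgt and SPDK's scripts/rpc.py on the default RPC socket (the bdev name and sizes are illustrative, not taken from this run):

    # create the TCP transport and a subsystem backed by one malloc bdev
    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # host connect() attempts only stop failing with ECONNREFUSED once this listener exists
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420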
00:30:26.019 [2024-11-15 11:53:51.279994] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:26.019 [2024-11-15 11:53:51.281117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.019 [2024-11-15 11:53:51.281266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.019 [2024-11-15 11:53:51.281323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.019 [2024-11-15 11:53:51.281360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.019 [2024-11-15 11:53:51.281396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:26.019 [2024-11-15 11:53:51.281477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:26.019 qpair failed and we were unable to recover it.
00:30:26.019 11:53:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:26.019 11:53:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:30:26.019 11:53:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:26.019 11:53:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:26.020 [2024-11-15 11:53:51.290640 .. 11:53:51.290995] the same seven-line CONNECT failure block repeats for the next attempt, again ending "qpair failed and we were unable to recover it."
00:30:26.020 11:53:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:26.020 11:53:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1264260
00:30:26.020 [2024-11-15 11:53:51.300732 .. 11:53:51.320861] further identical CONNECT failure blocks follow at roughly 10 ms intervals (attempts at .300732, .310750, .320701), each ending "qpair failed and we were unable to recover it."
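Decoding the block above: the TCP connect now succeeds, but the I/O queue's Fabrics CONNECT carries a controller ID the target no longer has ("Unknown controller ID 0x1" — the old association was torn down, which is the point of this disconnect test), so the target rejects the command. "sct 1, sc 130" is Status Code Type 1h (command specific) with Status Code 82h, which for the CONNECT command reads as Connect Invalid Parameters; the host then surfaces it as rc -5 (-EIO) and a CQ transport error -6 (-ENXIO). A hedged way to confirm the target-side view while this loops, assuming SPDK's scripts/rpc.py against the running target:

    # list the controllers and qpairs the subsystem currently knows about; after
    # the forced disconnect the old controller ID should be gone, which is why
    # the host's CONNECT retries are rejected
    scripts/rpc.py nvmf_subsystem_get_controllers nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode1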
00:30:26.020 [2024-11-15 11:53:51.330679 .. 11:53:51.651738] the identical seven-line CONNECT failure block (ctrlr.c: 762: Unknown controller ID 0x1 / nvme_fabric.c: 599: Connect command failed, rc -5 / nvme_fabric.c: 610: sct 1, sc 130 / nvme_tcp.c:2348: Failed to poll NVMe-oF Fabric CONNECT command / nvme_tcp.c:2125: Failed to connect tqpair=0x7fa108000b90 / nvme_qpair.c: 812: CQ transport error -6 (No such device or address) on qpair id 1) repeats at roughly 10 ms intervals for some 33 further attempts, each ending "qpair failed and we were unable to recover it."
00:30:26.285 [2024-11-15 11:53:51.661637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.285 [2024-11-15 11:53:51.661708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.285 [2024-11-15 11:53:51.661729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.285 [2024-11-15 11:53:51.661741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.285 [2024-11-15 11:53:51.661753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.285 [2024-11-15 11:53:51.661782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.285 qpair failed and we were unable to recover it. 00:30:26.285 [2024-11-15 11:53:51.671658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.285 [2024-11-15 11:53:51.671731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.285 [2024-11-15 11:53:51.671758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.285 [2024-11-15 11:53:51.671771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.285 [2024-11-15 11:53:51.671782] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.285 [2024-11-15 11:53:51.671808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.285 qpair failed and we were unable to recover it. 00:30:26.285 [2024-11-15 11:53:51.681699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.285 [2024-11-15 11:53:51.681781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.285 [2024-11-15 11:53:51.681802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.285 [2024-11-15 11:53:51.681814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.285 [2024-11-15 11:53:51.681825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.285 [2024-11-15 11:53:51.681853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.285 qpair failed and we were unable to recover it. 
00:30:26.285 [2024-11-15 11:53:51.691727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.285 [2024-11-15 11:53:51.691803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.285 [2024-11-15 11:53:51.691824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.285 [2024-11-15 11:53:51.691836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.285 [2024-11-15 11:53:51.691848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.285 [2024-11-15 11:53:51.691876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.285 qpair failed and we were unable to recover it. 00:30:26.285 [2024-11-15 11:53:51.701770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.285 [2024-11-15 11:53:51.701839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.285 [2024-11-15 11:53:51.701860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.285 [2024-11-15 11:53:51.701872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.285 [2024-11-15 11:53:51.701884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.285 [2024-11-15 11:53:51.701910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.285 qpair failed and we were unable to recover it. 00:30:26.285 [2024-11-15 11:53:51.711818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.285 [2024-11-15 11:53:51.711889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.285 [2024-11-15 11:53:51.711909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.285 [2024-11-15 11:53:51.711921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.285 [2024-11-15 11:53:51.711934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.285 [2024-11-15 11:53:51.711962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.285 qpair failed and we were unable to recover it. 
00:30:26.285 [2024-11-15 11:53:51.721869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.285 [2024-11-15 11:53:51.721948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.285 [2024-11-15 11:53:51.721975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.285 [2024-11-15 11:53:51.721987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.285 [2024-11-15 11:53:51.721999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.285 [2024-11-15 11:53:51.722028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.285 qpair failed and we were unable to recover it. 00:30:26.285 [2024-11-15 11:53:51.731921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.285 [2024-11-15 11:53:51.732027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.285 [2024-11-15 11:53:51.732048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.285 [2024-11-15 11:53:51.732060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.285 [2024-11-15 11:53:51.732072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.285 [2024-11-15 11:53:51.732101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.285 qpair failed and we were unable to recover it. 00:30:26.285 [2024-11-15 11:53:51.741818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.285 [2024-11-15 11:53:51.741891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.285 [2024-11-15 11:53:51.741912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.285 [2024-11-15 11:53:51.741924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.285 [2024-11-15 11:53:51.741938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.285 [2024-11-15 11:53:51.741965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.285 qpair failed and we were unable to recover it. 
00:30:26.285 [2024-11-15 11:53:51.751959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.285 [2024-11-15 11:53:51.752032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.285 [2024-11-15 11:53:51.752053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.285 [2024-11-15 11:53:51.752065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.285 [2024-11-15 11:53:51.752077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.285 [2024-11-15 11:53:51.752106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.285 qpair failed and we were unable to recover it. 00:30:26.285 [2024-11-15 11:53:51.762025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.285 [2024-11-15 11:53:51.762103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.285 [2024-11-15 11:53:51.762123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.285 [2024-11-15 11:53:51.762135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.285 [2024-11-15 11:53:51.762156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.285 [2024-11-15 11:53:51.762183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.285 qpair failed and we were unable to recover it. 00:30:26.285 [2024-11-15 11:53:51.772019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.285 [2024-11-15 11:53:51.772099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.285 [2024-11-15 11:53:51.772120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.285 [2024-11-15 11:53:51.772131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.285 [2024-11-15 11:53:51.772144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.285 [2024-11-15 11:53:51.772172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.285 qpair failed and we were unable to recover it. 
00:30:26.548 [2024-11-15 11:53:51.781913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.548 [2024-11-15 11:53:51.781983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.548 [2024-11-15 11:53:51.782009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.548 [2024-11-15 11:53:51.782020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.548 [2024-11-15 11:53:51.782031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.548 [2024-11-15 11:53:51.782057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.548 qpair failed and we were unable to recover it. 00:30:26.548 [2024-11-15 11:53:51.792097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.548 [2024-11-15 11:53:51.792181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.548 [2024-11-15 11:53:51.792202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.548 [2024-11-15 11:53:51.792214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.548 [2024-11-15 11:53:51.792226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.548 [2024-11-15 11:53:51.792253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.548 qpair failed and we were unable to recover it. 00:30:26.548 [2024-11-15 11:53:51.802024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.548 [2024-11-15 11:53:51.802099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.548 [2024-11-15 11:53:51.802121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.548 [2024-11-15 11:53:51.802134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.548 [2024-11-15 11:53:51.802145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.548 [2024-11-15 11:53:51.802173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.548 qpair failed and we were unable to recover it. 
00:30:26.548 [2024-11-15 11:53:51.812136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.548 [2024-11-15 11:53:51.812228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.548 [2024-11-15 11:53:51.812249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.548 [2024-11-15 11:53:51.812261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.548 [2024-11-15 11:53:51.812273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.548 [2024-11-15 11:53:51.812301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.548 qpair failed and we were unable to recover it. 00:30:26.548 [2024-11-15 11:53:51.822160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.548 [2024-11-15 11:53:51.822229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.548 [2024-11-15 11:53:51.822250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.548 [2024-11-15 11:53:51.822262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.548 [2024-11-15 11:53:51.822274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.548 [2024-11-15 11:53:51.822303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.548 qpair failed and we were unable to recover it. 00:30:26.548 [2024-11-15 11:53:51.832188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.548 [2024-11-15 11:53:51.832266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.548 [2024-11-15 11:53:51.832287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.548 [2024-11-15 11:53:51.832299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.548 [2024-11-15 11:53:51.832312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.548 [2024-11-15 11:53:51.832339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.548 qpair failed and we were unable to recover it. 
00:30:26.548 [2024-11-15 11:53:51.842214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.548 [2024-11-15 11:53:51.842296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.548 [2024-11-15 11:53:51.842319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.548 [2024-11-15 11:53:51.842332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.548 [2024-11-15 11:53:51.842343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.548 [2024-11-15 11:53:51.842371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.548 qpair failed and we were unable to recover it. 00:30:26.548 [2024-11-15 11:53:51.852272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.548 [2024-11-15 11:53:51.852355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.548 [2024-11-15 11:53:51.852399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.548 [2024-11-15 11:53:51.852413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.548 [2024-11-15 11:53:51.852425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.548 [2024-11-15 11:53:51.852461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.548 qpair failed and we were unable to recover it. 00:30:26.548 [2024-11-15 11:53:51.862236] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.548 [2024-11-15 11:53:51.862308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.548 [2024-11-15 11:53:51.862333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.548 [2024-11-15 11:53:51.862345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.548 [2024-11-15 11:53:51.862357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.548 [2024-11-15 11:53:51.862388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.548 qpair failed and we were unable to recover it. 
00:30:26.548 [2024-11-15 11:53:51.872319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.548 [2024-11-15 11:53:51.872391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.548 [2024-11-15 11:53:51.872413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.548 [2024-11-15 11:53:51.872425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.548 [2024-11-15 11:53:51.872437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.548 [2024-11-15 11:53:51.872467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.548 qpair failed and we were unable to recover it. 00:30:26.548 [2024-11-15 11:53:51.882399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.548 [2024-11-15 11:53:51.882472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.548 [2024-11-15 11:53:51.882494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.548 [2024-11-15 11:53:51.882506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.548 [2024-11-15 11:53:51.882518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.548 [2024-11-15 11:53:51.882545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.548 qpair failed and we were unable to recover it. 00:30:26.548 [2024-11-15 11:53:51.892410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.548 [2024-11-15 11:53:51.892478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.549 [2024-11-15 11:53:51.892501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.549 [2024-11-15 11:53:51.892520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.549 [2024-11-15 11:53:51.892533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.549 [2024-11-15 11:53:51.892560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.549 qpair failed and we were unable to recover it. 
00:30:26.549 [2024-11-15 11:53:51.902420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.549 [2024-11-15 11:53:51.902517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.549 [2024-11-15 11:53:51.902539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.549 [2024-11-15 11:53:51.902550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.549 [2024-11-15 11:53:51.902570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.549 [2024-11-15 11:53:51.902600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.549 qpair failed and we were unable to recover it. 00:30:26.549 [2024-11-15 11:53:51.912450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.549 [2024-11-15 11:53:51.912580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.549 [2024-11-15 11:53:51.912603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.549 [2024-11-15 11:53:51.912614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.549 [2024-11-15 11:53:51.912626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.549 [2024-11-15 11:53:51.912655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.549 qpair failed and we were unable to recover it. 00:30:26.549 [2024-11-15 11:53:51.922549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.549 [2024-11-15 11:53:51.922632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.549 [2024-11-15 11:53:51.922653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.549 [2024-11-15 11:53:51.922665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.549 [2024-11-15 11:53:51.922679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.549 [2024-11-15 11:53:51.922707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.549 qpair failed and we were unable to recover it. 
00:30:26.549 [2024-11-15 11:53:51.932488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.549 [2024-11-15 11:53:51.932557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.549 [2024-11-15 11:53:51.932586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.549 [2024-11-15 11:53:51.932598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.549 [2024-11-15 11:53:51.932611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.549 [2024-11-15 11:53:51.932640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.549 qpair failed and we were unable to recover it. 00:30:26.549 [2024-11-15 11:53:51.942523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.549 [2024-11-15 11:53:51.942592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.549 [2024-11-15 11:53:51.942614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.549 [2024-11-15 11:53:51.942626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.549 [2024-11-15 11:53:51.942637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.549 [2024-11-15 11:53:51.942665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.549 qpair failed and we were unable to recover it. 00:30:26.549 [2024-11-15 11:53:51.952525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.549 [2024-11-15 11:53:51.952607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.549 [2024-11-15 11:53:51.952628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.549 [2024-11-15 11:53:51.952640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.549 [2024-11-15 11:53:51.952653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.549 [2024-11-15 11:53:51.952681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.549 qpair failed and we were unable to recover it. 
00:30:26.549 [2024-11-15 11:53:51.962579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.549 [2024-11-15 11:53:51.962657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.549 [2024-11-15 11:53:51.962679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.549 [2024-11-15 11:53:51.962690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.549 [2024-11-15 11:53:51.962701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.549 [2024-11-15 11:53:51.962729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.549 qpair failed and we were unable to recover it. 00:30:26.549 [2024-11-15 11:53:51.972628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.549 [2024-11-15 11:53:51.972695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.549 [2024-11-15 11:53:51.972715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.549 [2024-11-15 11:53:51.972727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.549 [2024-11-15 11:53:51.972738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.549 [2024-11-15 11:53:51.972767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.549 qpair failed and we were unable to recover it. 00:30:26.549 [2024-11-15 11:53:51.982627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.549 [2024-11-15 11:53:51.982695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.549 [2024-11-15 11:53:51.982724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.549 [2024-11-15 11:53:51.982736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.549 [2024-11-15 11:53:51.982748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.549 [2024-11-15 11:53:51.982776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.549 qpair failed and we were unable to recover it. 
00:30:26.549 [2024-11-15 11:53:51.992691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.549 [2024-11-15 11:53:51.992776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.549 [2024-11-15 11:53:51.992798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.549 [2024-11-15 11:53:51.992810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.549 [2024-11-15 11:53:51.992821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.549 [2024-11-15 11:53:51.992849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.549 qpair failed and we were unable to recover it. 00:30:26.549 [2024-11-15 11:53:52.002747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.549 [2024-11-15 11:53:52.002823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.549 [2024-11-15 11:53:52.002845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.549 [2024-11-15 11:53:52.002856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.549 [2024-11-15 11:53:52.002868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.549 [2024-11-15 11:53:52.002896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.549 qpair failed and we were unable to recover it. 00:30:26.549 [2024-11-15 11:53:52.012768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.549 [2024-11-15 11:53:52.012841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.549 [2024-11-15 11:53:52.012863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.549 [2024-11-15 11:53:52.012875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.549 [2024-11-15 11:53:52.012887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.549 [2024-11-15 11:53:52.012915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.549 qpair failed and we were unable to recover it. 
00:30:26.549 [2024-11-15 11:53:52.022671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.549 [2024-11-15 11:53:52.022740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.549 [2024-11-15 11:53:52.022761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.550 [2024-11-15 11:53:52.022781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.550 [2024-11-15 11:53:52.022795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.550 [2024-11-15 11:53:52.022823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.550 qpair failed and we were unable to recover it. 00:30:26.550 [2024-11-15 11:53:52.032827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.550 [2024-11-15 11:53:52.032902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.550 [2024-11-15 11:53:52.032924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.550 [2024-11-15 11:53:52.032936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.550 [2024-11-15 11:53:52.032947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.550 [2024-11-15 11:53:52.032976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.550 qpair failed and we were unable to recover it. 00:30:26.812 [2024-11-15 11:53:52.042783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.813 [2024-11-15 11:53:52.042864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.813 [2024-11-15 11:53:52.042886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.813 [2024-11-15 11:53:52.042898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.813 [2024-11-15 11:53:52.042911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.813 [2024-11-15 11:53:52.042939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.813 qpair failed and we were unable to recover it. 
00:30:26.813 [2024-11-15 11:53:52.052886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.813 [2024-11-15 11:53:52.052959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.813 [2024-11-15 11:53:52.052980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.813 [2024-11-15 11:53:52.052993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.813 [2024-11-15 11:53:52.053004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.813 [2024-11-15 11:53:52.053033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.813 qpair failed and we were unable to recover it. 00:30:26.813 [2024-11-15 11:53:52.062814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.813 [2024-11-15 11:53:52.062924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.813 [2024-11-15 11:53:52.062945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.813 [2024-11-15 11:53:52.062957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.813 [2024-11-15 11:53:52.062970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.813 [2024-11-15 11:53:52.063011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.813 qpair failed and we were unable to recover it. 00:30:26.813 [2024-11-15 11:53:52.072947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.813 [2024-11-15 11:53:52.073024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.813 [2024-11-15 11:53:52.073047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.813 [2024-11-15 11:53:52.073059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.813 [2024-11-15 11:53:52.073070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.813 [2024-11-15 11:53:52.073099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.813 qpair failed and we were unable to recover it. 
00:30:26.813 [2024-11-15 11:53:52.083023] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.813 [2024-11-15 11:53:52.083100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.813 [2024-11-15 11:53:52.083123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.813 [2024-11-15 11:53:52.083134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.813 [2024-11-15 11:53:52.083144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.813 [2024-11-15 11:53:52.083172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.813 qpair failed and we were unable to recover it. 00:30:26.813 [2024-11-15 11:53:52.093041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.813 [2024-11-15 11:53:52.093118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.813 [2024-11-15 11:53:52.093140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.813 [2024-11-15 11:53:52.093153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.813 [2024-11-15 11:53:52.093164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.813 [2024-11-15 11:53:52.093193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.813 qpair failed and we were unable to recover it. 00:30:26.813 [2024-11-15 11:53:52.103044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.813 [2024-11-15 11:53:52.103115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.813 [2024-11-15 11:53:52.103137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.813 [2024-11-15 11:53:52.103150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.813 [2024-11-15 11:53:52.103161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.813 [2024-11-15 11:53:52.103190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.813 qpair failed and we were unable to recover it. 
00:30:26.813 [2024-11-15 11:53:52.113079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.813 [2024-11-15 11:53:52.113161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.813 [2024-11-15 11:53:52.113186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.813 [2024-11-15 11:53:52.113198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.813 [2024-11-15 11:53:52.113210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.813 [2024-11-15 11:53:52.113236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.813 qpair failed and we were unable to recover it. 00:30:26.813 [2024-11-15 11:53:52.123110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.813 [2024-11-15 11:53:52.123193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.813 [2024-11-15 11:53:52.123214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.813 [2024-11-15 11:53:52.123225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.813 [2024-11-15 11:53:52.123238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.813 [2024-11-15 11:53:52.123267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.813 qpair failed and we were unable to recover it. 00:30:26.813 [2024-11-15 11:53:52.133001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.813 [2024-11-15 11:53:52.133074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.813 [2024-11-15 11:53:52.133095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.813 [2024-11-15 11:53:52.133107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.813 [2024-11-15 11:53:52.133118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.813 [2024-11-15 11:53:52.133146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.813 qpair failed and we were unable to recover it. 
00:30:26.813 [2024-11-15 11:53:52.143150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.813 [2024-11-15 11:53:52.143220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.813 [2024-11-15 11:53:52.143247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.813 [2024-11-15 11:53:52.143258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.813 [2024-11-15 11:53:52.143269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.813 [2024-11-15 11:53:52.143298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.813 qpair failed and we were unable to recover it. 00:30:26.813 [2024-11-15 11:53:52.153200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.813 [2024-11-15 11:53:52.153276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.813 [2024-11-15 11:53:52.153303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.813 [2024-11-15 11:53:52.153315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.813 [2024-11-15 11:53:52.153326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.813 [2024-11-15 11:53:52.153354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.813 qpair failed and we were unable to recover it. 00:30:26.813 [2024-11-15 11:53:52.163221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.813 [2024-11-15 11:53:52.163298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.813 [2024-11-15 11:53:52.163321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.813 [2024-11-15 11:53:52.163333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.813 [2024-11-15 11:53:52.163346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:26.813 [2024-11-15 11:53:52.163373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:26.813 qpair failed and we were unable to recover it. 
00:30:26.814 [2024-11-15 11:53:52.173201] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.814 [2024-11-15 11:53:52.173276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.814 [2024-11-15 11:53:52.173297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.814 [2024-11-15 11:53:52.173309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.814 [2024-11-15 11:53:52.173321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:26.814 [2024-11-15 11:53:52.173349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:26.814 qpair failed and we were unable to recover it.
00:30:26.814 [2024-11-15 11:53:52.183262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.814 [2024-11-15 11:53:52.183339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.814 [2024-11-15 11:53:52.183360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.814 [2024-11-15 11:53:52.183371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.814 [2024-11-15 11:53:52.183382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:26.814 [2024-11-15 11:53:52.183410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:26.814 qpair failed and we were unable to recover it.
00:30:26.814 [2024-11-15 11:53:52.193310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.814 [2024-11-15 11:53:52.193400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.814 [2024-11-15 11:53:52.193421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.814 [2024-11-15 11:53:52.193433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.814 [2024-11-15 11:53:52.193444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:26.814 [2024-11-15 11:53:52.193477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:26.814 qpair failed and we were unable to recover it.
00:30:26.814 [2024-11-15 11:53:52.203345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.814 [2024-11-15 11:53:52.203425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.814 [2024-11-15 11:53:52.203447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.814 [2024-11-15 11:53:52.203458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.814 [2024-11-15 11:53:52.203470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:26.814 [2024-11-15 11:53:52.203498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:26.814 qpair failed and we were unable to recover it.
00:30:26.814 [2024-11-15 11:53:52.213358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.814 [2024-11-15 11:53:52.213432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.814 [2024-11-15 11:53:52.213453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.814 [2024-11-15 11:53:52.213465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.814 [2024-11-15 11:53:52.213478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:26.814 [2024-11-15 11:53:52.213505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:26.814 qpair failed and we were unable to recover it.
00:30:26.814 [2024-11-15 11:53:52.223391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.814 [2024-11-15 11:53:52.223465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.814 [2024-11-15 11:53:52.223487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.814 [2024-11-15 11:53:52.223499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.814 [2024-11-15 11:53:52.223510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:26.814 [2024-11-15 11:53:52.223536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:26.814 qpair failed and we were unable to recover it.
00:30:26.814 [2024-11-15 11:53:52.233422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.814 [2024-11-15 11:53:52.233497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.814 [2024-11-15 11:53:52.233519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.814 [2024-11-15 11:53:52.233530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.814 [2024-11-15 11:53:52.233541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:26.814 [2024-11-15 11:53:52.233575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:26.814 qpair failed and we were unable to recover it.
00:30:26.814 [2024-11-15 11:53:52.243460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.814 [2024-11-15 11:53:52.243579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.814 [2024-11-15 11:53:52.243601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.814 [2024-11-15 11:53:52.243613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.814 [2024-11-15 11:53:52.243626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:26.814 [2024-11-15 11:53:52.243655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:26.814 qpair failed and we were unable to recover it.
00:30:26.814 [2024-11-15 11:53:52.253464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.814 [2024-11-15 11:53:52.253539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.814 [2024-11-15 11:53:52.253568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.814 [2024-11-15 11:53:52.253581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.814 [2024-11-15 11:53:52.253592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:26.814 [2024-11-15 11:53:52.253617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:26.814 qpair failed and we were unable to recover it.
00:30:26.814 [2024-11-15 11:53:52.263538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.814 [2024-11-15 11:53:52.263615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.814 [2024-11-15 11:53:52.263645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.814 [2024-11-15 11:53:52.263657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.814 [2024-11-15 11:53:52.263666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:26.814 [2024-11-15 11:53:52.263695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:26.814 qpair failed and we were unable to recover it.
00:30:26.814 [2024-11-15 11:53:52.273541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.814 [2024-11-15 11:53:52.273625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.814 [2024-11-15 11:53:52.273653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.814 [2024-11-15 11:53:52.273664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.814 [2024-11-15 11:53:52.273674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:26.814 [2024-11-15 11:53:52.273701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:26.814 qpair failed and we were unable to recover it.
00:30:26.814 [2024-11-15 11:53:52.283583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.814 [2024-11-15 11:53:52.283662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.814 [2024-11-15 11:53:52.283697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.814 [2024-11-15 11:53:52.283709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.814 [2024-11-15 11:53:52.283719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:26.814 [2024-11-15 11:53:52.283747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:26.814 qpair failed and we were unable to recover it.
00:30:26.814 [2024-11-15 11:53:52.293582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.814 [2024-11-15 11:53:52.293652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.814 [2024-11-15 11:53:52.293680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.814 [2024-11-15 11:53:52.293692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.814 [2024-11-15 11:53:52.293703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:26.814 [2024-11-15 11:53:52.293730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:26.814 qpair failed and we were unable to recover it.
00:30:26.814 [2024-11-15 11:53:52.303621] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.815 [2024-11-15 11:53:52.303689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.815 [2024-11-15 11:53:52.303715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.815 [2024-11-15 11:53:52.303728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.815 [2024-11-15 11:53:52.303739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:26.815 [2024-11-15 11:53:52.303766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:26.815 qpair failed and we were unable to recover it.
00:30:27.076 [2024-11-15 11:53:52.313649] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.076 [2024-11-15 11:53:52.313728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.076 [2024-11-15 11:53:52.313754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.077 [2024-11-15 11:53:52.313766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.077 [2024-11-15 11:53:52.313777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.077 [2024-11-15 11:53:52.313803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.077 qpair failed and we were unable to recover it.
00:30:27.077 [2024-11-15 11:53:52.323709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.077 [2024-11-15 11:53:52.323823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.077 [2024-11-15 11:53:52.323848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.077 [2024-11-15 11:53:52.323860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.077 [2024-11-15 11:53:52.323879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.077 [2024-11-15 11:53:52.323907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.077 qpair failed and we were unable to recover it.
00:30:27.077 [2024-11-15 11:53:52.333716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.077 [2024-11-15 11:53:52.333816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.077 [2024-11-15 11:53:52.333840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.077 [2024-11-15 11:53:52.333853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.077 [2024-11-15 11:53:52.333864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.077 [2024-11-15 11:53:52.333892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.077 qpair failed and we were unable to recover it.
00:30:27.077 [2024-11-15 11:53:52.343755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.077 [2024-11-15 11:53:52.343840] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.077 [2024-11-15 11:53:52.343866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.077 [2024-11-15 11:53:52.343879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.077 [2024-11-15 11:53:52.343890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.077 [2024-11-15 11:53:52.343917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.077 qpair failed and we were unable to recover it.
00:30:27.077 [2024-11-15 11:53:52.353802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.077 [2024-11-15 11:53:52.353881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.077 [2024-11-15 11:53:52.353909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.077 [2024-11-15 11:53:52.353921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.077 [2024-11-15 11:53:52.353933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.077 [2024-11-15 11:53:52.353960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.077 qpair failed and we were unable to recover it.
00:30:27.077 [2024-11-15 11:53:52.363818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.077 [2024-11-15 11:53:52.363905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.077 [2024-11-15 11:53:52.363931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.077 [2024-11-15 11:53:52.363943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.077 [2024-11-15 11:53:52.363954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.077 [2024-11-15 11:53:52.363980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.077 qpair failed and we were unable to recover it.
00:30:27.077 [2024-11-15 11:53:52.373843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.077 [2024-11-15 11:53:52.373952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.077 [2024-11-15 11:53:52.373974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.077 [2024-11-15 11:53:52.373986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.077 [2024-11-15 11:53:52.373998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.077 [2024-11-15 11:53:52.374026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.077 qpair failed and we were unable to recover it.
00:30:27.077 [2024-11-15 11:53:52.383844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.077 [2024-11-15 11:53:52.383916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.077 [2024-11-15 11:53:52.383944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.077 [2024-11-15 11:53:52.383956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.077 [2024-11-15 11:53:52.383968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.077 [2024-11-15 11:53:52.383995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.077 qpair failed and we were unable to recover it.
00:30:27.077 [2024-11-15 11:53:52.393919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.077 [2024-11-15 11:53:52.393997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.077 [2024-11-15 11:53:52.394026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.077 [2024-11-15 11:53:52.394038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.077 [2024-11-15 11:53:52.394049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.077 [2024-11-15 11:53:52.394076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.077 qpair failed and we were unable to recover it.
00:30:27.077 [2024-11-15 11:53:52.403979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.077 [2024-11-15 11:53:52.404064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.077 [2024-11-15 11:53:52.404092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.077 [2024-11-15 11:53:52.404104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.077 [2024-11-15 11:53:52.404115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.077 [2024-11-15 11:53:52.404142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.077 qpair failed and we were unable to recover it.
00:30:27.077 [2024-11-15 11:53:52.413949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.077 [2024-11-15 11:53:52.414041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.077 [2024-11-15 11:53:52.414071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.077 [2024-11-15 11:53:52.414083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.077 [2024-11-15 11:53:52.414094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.077 [2024-11-15 11:53:52.414123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.077 qpair failed and we were unable to recover it.
00:30:27.077 [2024-11-15 11:53:52.423862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.077 [2024-11-15 11:53:52.423940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.077 [2024-11-15 11:53:52.423967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.077 [2024-11-15 11:53:52.423979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.077 [2024-11-15 11:53:52.423990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.077 [2024-11-15 11:53:52.424016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.077 qpair failed and we were unable to recover it.
00:30:27.077 [2024-11-15 11:53:52.434022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.077 [2024-11-15 11:53:52.434095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.077 [2024-11-15 11:53:52.434125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.077 [2024-11-15 11:53:52.434137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.077 [2024-11-15 11:53:52.434148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.077 [2024-11-15 11:53:52.434175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.077 qpair failed and we were unable to recover it.
00:30:27.077 [2024-11-15 11:53:52.444045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.077 [2024-11-15 11:53:52.444129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.078 [2024-11-15 11:53:52.444156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.078 [2024-11-15 11:53:52.444168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.078 [2024-11-15 11:53:52.444179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.078 [2024-11-15 11:53:52.444206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.078 qpair failed and we were unable to recover it.
00:30:27.078 [2024-11-15 11:53:52.454099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.078 [2024-11-15 11:53:52.454172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.078 [2024-11-15 11:53:52.454200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.078 [2024-11-15 11:53:52.454220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.078 [2024-11-15 11:53:52.454233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.078 [2024-11-15 11:53:52.454260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.078 qpair failed and we were unable to recover it.
00:30:27.078 [2024-11-15 11:53:52.464095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.078 [2024-11-15 11:53:52.464169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.078 [2024-11-15 11:53:52.464208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.078 [2024-11-15 11:53:52.464221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.078 [2024-11-15 11:53:52.464232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.078 [2024-11-15 11:53:52.464266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.078 qpair failed and we were unable to recover it.
00:30:27.078 [2024-11-15 11:53:52.474188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.078 [2024-11-15 11:53:52.474310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.078 [2024-11-15 11:53:52.474351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.078 [2024-11-15 11:53:52.474364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.078 [2024-11-15 11:53:52.474376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.078 [2024-11-15 11:53:52.474410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.078 qpair failed and we were unable to recover it.
00:30:27.078 [2024-11-15 11:53:52.484208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.078 [2024-11-15 11:53:52.484312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.078 [2024-11-15 11:53:52.484339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.078 [2024-11-15 11:53:52.484351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.078 [2024-11-15 11:53:52.484363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.078 [2024-11-15 11:53:52.484392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.078 qpair failed and we were unable to recover it.
00:30:27.078 [2024-11-15 11:53:52.494095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.078 [2024-11-15 11:53:52.494169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.078 [2024-11-15 11:53:52.494198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.078 [2024-11-15 11:53:52.494210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.078 [2024-11-15 11:53:52.494221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.078 [2024-11-15 11:53:52.494249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.078 qpair failed and we were unable to recover it.
00:30:27.078 [2024-11-15 11:53:52.504243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.078 [2024-11-15 11:53:52.504323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.078 [2024-11-15 11:53:52.504386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.078 [2024-11-15 11:53:52.504400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.078 [2024-11-15 11:53:52.504411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.078 [2024-11-15 11:53:52.504454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.078 qpair failed and we were unable to recover it.
00:30:27.078 [2024-11-15 11:53:52.514284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.078 [2024-11-15 11:53:52.514369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.078 [2024-11-15 11:53:52.514396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.078 [2024-11-15 11:53:52.514409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.078 [2024-11-15 11:53:52.514420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.078 [2024-11-15 11:53:52.514449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.078 qpair failed and we were unable to recover it.
00:30:27.078 [2024-11-15 11:53:52.524214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.078 [2024-11-15 11:53:52.524321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.078 [2024-11-15 11:53:52.524352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.078 [2024-11-15 11:53:52.524366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.078 [2024-11-15 11:53:52.524378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.078 [2024-11-15 11:53:52.524407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.078 qpair failed and we were unable to recover it.
00:30:27.078 [2024-11-15 11:53:52.534408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.078 [2024-11-15 11:53:52.534487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.078 [2024-11-15 11:53:52.534515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.078 [2024-11-15 11:53:52.534528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.078 [2024-11-15 11:53:52.534538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.078 [2024-11-15 11:53:52.534573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.078 qpair failed and we were unable to recover it.
00:30:27.078 [2024-11-15 11:53:52.544350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.078 [2024-11-15 11:53:52.544426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.078 [2024-11-15 11:53:52.544457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.078 [2024-11-15 11:53:52.544469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.078 [2024-11-15 11:53:52.544480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.078 [2024-11-15 11:53:52.544507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.078 qpair failed and we were unable to recover it.
00:30:27.078 [2024-11-15 11:53:52.554432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.078 [2024-11-15 11:53:52.554506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.078 [2024-11-15 11:53:52.554536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.078 [2024-11-15 11:53:52.554548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.078 [2024-11-15 11:53:52.554559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.079 [2024-11-15 11:53:52.554594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.079 qpair failed and we were unable to recover it.
00:30:27.079 [2024-11-15 11:53:52.564447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.079 [2024-11-15 11:53:52.564528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.079 [2024-11-15 11:53:52.564556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.079 [2024-11-15 11:53:52.564573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.079 [2024-11-15 11:53:52.564584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.079 [2024-11-15 11:53:52.564612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.079 qpair failed and we were unable to recover it.
00:30:27.341 [2024-11-15 11:53:52.574341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.341 [2024-11-15 11:53:52.574411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.341 [2024-11-15 11:53:52.574439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.341 [2024-11-15 11:53:52.574451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.341 [2024-11-15 11:53:52.574463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.341 [2024-11-15 11:53:52.574490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.341 qpair failed and we were unable to recover it.
00:30:27.341 [2024-11-15 11:53:52.584495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.341 [2024-11-15 11:53:52.584605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.341 [2024-11-15 11:53:52.584627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.341 [2024-11-15 11:53:52.584648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.341 [2024-11-15 11:53:52.584659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.341 [2024-11-15 11:53:52.584687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.341 qpair failed and we were unable to recover it.
00:30:27.341 [2024-11-15 11:53:52.594411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.341 [2024-11-15 11:53:52.594525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.341 [2024-11-15 11:53:52.594549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.341 [2024-11-15 11:53:52.594566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.341 [2024-11-15 11:53:52.594579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.341 [2024-11-15 11:53:52.594606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.341 qpair failed and we were unable to recover it.
00:30:27.341 [2024-11-15 11:53:52.604571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.341 [2024-11-15 11:53:52.604652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.341 [2024-11-15 11:53:52.604681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.341 [2024-11-15 11:53:52.604693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.341 [2024-11-15 11:53:52.604703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.341 [2024-11-15 11:53:52.604731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.341 qpair failed and we were unable to recover it.
00:30:27.341 [2024-11-15 11:53:52.614447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.341 [2024-11-15 11:53:52.614522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.341 [2024-11-15 11:53:52.614549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.341 [2024-11-15 11:53:52.614566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.341 [2024-11-15 11:53:52.614576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.341 [2024-11-15 11:53:52.614602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.341 qpair failed and we were unable to recover it.
00:30:27.341 [2024-11-15 11:53:52.624492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.341 [2024-11-15 11:53:52.624559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.341 [2024-11-15 11:53:52.624596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.341 [2024-11-15 11:53:52.624608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.341 [2024-11-15 11:53:52.624619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.341 [2024-11-15 11:53:52.624653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.341 qpair failed and we were unable to recover it.
00:30:27.341 [2024-11-15 11:53:52.634657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.341 [2024-11-15 11:53:52.634728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.341 [2024-11-15 11:53:52.634758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.341 [2024-11-15 11:53:52.634770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.341 [2024-11-15 11:53:52.634781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.341 [2024-11-15 11:53:52.634808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.341 qpair failed and we were unable to recover it.
00:30:27.341 [2024-11-15 11:53:52.644724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.341 [2024-11-15 11:53:52.644807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.341 [2024-11-15 11:53:52.644833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.341 [2024-11-15 11:53:52.644845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.341 [2024-11-15 11:53:52.644856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.341 [2024-11-15 11:53:52.644882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.341 qpair failed and we were unable to recover it.
00:30:27.341 [2024-11-15 11:53:52.654710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.341 [2024-11-15 11:53:52.654785] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.341 [2024-11-15 11:53:52.654812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.341 [2024-11-15 11:53:52.654824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.341 [2024-11-15 11:53:52.654835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.341 [2024-11-15 11:53:52.654862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.341 qpair failed and we were unable to recover it.
00:30:27.341 [2024-11-15 11:53:52.664727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.341 [2024-11-15 11:53:52.664829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.341 [2024-11-15 11:53:52.664852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.341 [2024-11-15 11:53:52.664864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.341 [2024-11-15 11:53:52.664876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.341 [2024-11-15 11:53:52.664901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.341 qpair failed and we were unable to recover it.
00:30:27.341 [2024-11-15 11:53:52.674778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.341 [2024-11-15 11:53:52.674862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.341 [2024-11-15 11:53:52.674889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.341 [2024-11-15 11:53:52.674902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.341 [2024-11-15 11:53:52.674913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.341 [2024-11-15 11:53:52.674941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.341 qpair failed and we were unable to recover it.
00:30:27.341 [2024-11-15 11:53:52.684812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.341 [2024-11-15 11:53:52.684899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.341 [2024-11-15 11:53:52.684925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.341 [2024-11-15 11:53:52.684937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.342 [2024-11-15 11:53:52.684948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.342 [2024-11-15 11:53:52.684976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.342 qpair failed and we were unable to recover it.
00:30:27.342 [2024-11-15 11:53:52.694793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.342 [2024-11-15 11:53:52.694870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.342 [2024-11-15 11:53:52.694895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.342 [2024-11-15 11:53:52.694908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.342 [2024-11-15 11:53:52.694919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.342 [2024-11-15 11:53:52.694945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.342 qpair failed and we were unable to recover it.
00:30:27.342 [2024-11-15 11:53:52.704830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.342 [2024-11-15 11:53:52.704906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.342 [2024-11-15 11:53:52.704934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.342 [2024-11-15 11:53:52.704947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.342 [2024-11-15 11:53:52.704958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.342 [2024-11-15 11:53:52.704985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.342 qpair failed and we were unable to recover it.
00:30:27.342 [2024-11-15 11:53:52.714860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.342 [2024-11-15 11:53:52.714933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.342 [2024-11-15 11:53:52.714966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.342 [2024-11-15 11:53:52.714978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.342 [2024-11-15 11:53:52.714989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.342 [2024-11-15 11:53:52.715016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.342 qpair failed and we were unable to recover it.
00:30:27.342 [2024-11-15 11:53:52.724942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.342 [2024-11-15 11:53:52.725020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.342 [2024-11-15 11:53:52.725047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.342 [2024-11-15 11:53:52.725059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.342 [2024-11-15 11:53:52.725070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.342 [2024-11-15 11:53:52.725096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.342 qpair failed and we were unable to recover it.
00:30:27.342 [2024-11-15 11:53:52.734917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.342 [2024-11-15 11:53:52.734992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.342 [2024-11-15 11:53:52.735018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.342 [2024-11-15 11:53:52.735030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.342 [2024-11-15 11:53:52.735041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.342 [2024-11-15 11:53:52.735068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.342 qpair failed and we were unable to recover it.
00:30:27.342 [2024-11-15 11:53:52.744967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.342 [2024-11-15 11:53:52.745040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.342 [2024-11-15 11:53:52.745065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.342 [2024-11-15 11:53:52.745077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.342 [2024-11-15 11:53:52.745088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.342 [2024-11-15 11:53:52.745115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.342 qpair failed and we were unable to recover it.
00:30:27.342 [2024-11-15 11:53:52.755027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.342 [2024-11-15 11:53:52.755146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.342 [2024-11-15 11:53:52.755168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.342 [2024-11-15 11:53:52.755180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.342 [2024-11-15 11:53:52.755198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.342 [2024-11-15 11:53:52.755225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.342 qpair failed and we were unable to recover it.
00:30:27.342 [2024-11-15 11:53:52.765018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.342 [2024-11-15 11:53:52.765097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.342 [2024-11-15 11:53:52.765126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.342 [2024-11-15 11:53:52.765139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.342 [2024-11-15 11:53:52.765150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.342 [2024-11-15 11:53:52.765178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.342 qpair failed and we were unable to recover it.
00:30:27.342 [2024-11-15 11:53:52.775069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.342 [2024-11-15 11:53:52.775181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.342 [2024-11-15 11:53:52.775208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.342 [2024-11-15 11:53:52.775220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.342 [2024-11-15 11:53:52.775231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.342 [2024-11-15 11:53:52.775258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.342 qpair failed and we were unable to recover it.
00:30:27.342 [2024-11-15 11:53:52.785075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.342 [2024-11-15 11:53:52.785138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.342 [2024-11-15 11:53:52.785162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.342 [2024-11-15 11:53:52.785175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.342 [2024-11-15 11:53:52.785186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.342 [2024-11-15 11:53:52.785212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.342 qpair failed and we were unable to recover it.
00:30:27.342 [2024-11-15 11:53:52.795074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.342 [2024-11-15 11:53:52.795153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.342 [2024-11-15 11:53:52.795179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.342 [2024-11-15 11:53:52.795191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.342 [2024-11-15 11:53:52.795202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.342 [2024-11-15 11:53:52.795227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.342 qpair failed and we were unable to recover it.
00:30:27.342 [2024-11-15 11:53:52.805153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.342 [2024-11-15 11:53:52.805257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.342 [2024-11-15 11:53:52.805282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.342 [2024-11-15 11:53:52.805294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.342 [2024-11-15 11:53:52.805305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.342 [2024-11-15 11:53:52.805333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.342 qpair failed and we were unable to recover it.
00:30:27.342 [2024-11-15 11:53:52.815117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.342 [2024-11-15 11:53:52.815190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.342 [2024-11-15 11:53:52.815213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.342 [2024-11-15 11:53:52.815224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.343 [2024-11-15 11:53:52.815235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.343 [2024-11-15 11:53:52.815261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.343 qpair failed and we were unable to recover it.
00:30:27.343 [2024-11-15 11:53:52.825147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.343 [2024-11-15 11:53:52.825204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.343 [2024-11-15 11:53:52.825228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.343 [2024-11-15 11:53:52.825241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.343 [2024-11-15 11:53:52.825252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.343 [2024-11-15 11:53:52.825277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.343 qpair failed and we were unable to recover it.
00:30:27.343 [2024-11-15 11:53:52.835195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.343 [2024-11-15 11:53:52.835268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.343 [2024-11-15 11:53:52.835301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.343 [2024-11-15 11:53:52.835314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.343 [2024-11-15 11:53:52.835326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.343 [2024-11-15 11:53:52.835357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.343 qpair failed and we were unable to recover it.
00:30:27.605 [2024-11-15 11:53:52.845247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.605 [2024-11-15 11:53:52.845320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.605 [2024-11-15 11:53:52.845360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.605 [2024-11-15 11:53:52.845374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.605 [2024-11-15 11:53:52.845386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.605 [2024-11-15 11:53:52.845416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.605 qpair failed and we were unable to recover it.
00:30:27.605 [2024-11-15 11:53:52.855276] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.605 [2024-11-15 11:53:52.855358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.605 [2024-11-15 11:53:52.855382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.605 [2024-11-15 11:53:52.855394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.605 [2024-11-15 11:53:52.855405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.605 [2024-11-15 11:53:52.855432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.605 qpair failed and we were unable to recover it.
00:30:27.605 [2024-11-15 11:53:52.865241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.605 [2024-11-15 11:53:52.865301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.605 [2024-11-15 11:53:52.865325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.605 [2024-11-15 11:53:52.865338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.605 [2024-11-15 11:53:52.865349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.605 [2024-11-15 11:53:52.865375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.605 qpair failed and we were unable to recover it.
00:30:27.605 [2024-11-15 11:53:52.875314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.605 [2024-11-15 11:53:52.875388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.605 [2024-11-15 11:53:52.875410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.605 [2024-11-15 11:53:52.875422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.605 [2024-11-15 11:53:52.875432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.605 [2024-11-15 11:53:52.875458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.605 qpair failed and we were unable to recover it.
00:30:27.605 [2024-11-15 11:53:52.885366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.605 [2024-11-15 11:53:52.885436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.605 [2024-11-15 11:53:52.885458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.605 [2024-11-15 11:53:52.885471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.605 [2024-11-15 11:53:52.885487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.605 [2024-11-15 11:53:52.885511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.605 qpair failed and we were unable to recover it.
00:30:27.605 [2024-11-15 11:53:52.895343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.605 [2024-11-15 11:53:52.895408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.605 [2024-11-15 11:53:52.895430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.605 [2024-11-15 11:53:52.895442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.605 [2024-11-15 11:53:52.895453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.605 [2024-11-15 11:53:52.895477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.605 qpair failed and we were unable to recover it.
00:30:27.605 [2024-11-15 11:53:52.905221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.605 [2024-11-15 11:53:52.905276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.605 [2024-11-15 11:53:52.905301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.605 [2024-11-15 11:53:52.905313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.605 [2024-11-15 11:53:52.905324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.605 [2024-11-15 11:53:52.905348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.605 qpair failed and we were unable to recover it.
00:30:27.605 [2024-11-15 11:53:52.915416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.605 [2024-11-15 11:53:52.915499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.605 [2024-11-15 11:53:52.915517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.605 [2024-11-15 11:53:52.915529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.605 [2024-11-15 11:53:52.915540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.605 [2024-11-15 11:53:52.915570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.605 qpair failed and we were unable to recover it.
00:30:27.605 [2024-11-15 11:53:52.925456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.605 [2024-11-15 11:53:52.925528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.605 [2024-11-15 11:53:52.925547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.605 [2024-11-15 11:53:52.925558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.605 [2024-11-15 11:53:52.925574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.605 [2024-11-15 11:53:52.925598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.605 qpair failed and we were unable to recover it.
00:30:27.605 [2024-11-15 11:53:52.935323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.605 [2024-11-15 11:53:52.935381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.605 [2024-11-15 11:53:52.935402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.605 [2024-11-15 11:53:52.935414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.605 [2024-11-15 11:53:52.935425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.605 [2024-11-15 11:53:52.935448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.605 qpair failed and we were unable to recover it.
00:30:27.605 [2024-11-15 11:53:52.945505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.605 [2024-11-15 11:53:52.945621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.605 [2024-11-15 11:53:52.945638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.605 [2024-11-15 11:53:52.945649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.605 [2024-11-15 11:53:52.945660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.605 [2024-11-15 11:53:52.945683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.605 qpair failed and we were unable to recover it.
00:30:27.605 [2024-11-15 11:53:52.955556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.605 [2024-11-15 11:53:52.955645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.605 [2024-11-15 11:53:52.955662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.606 [2024-11-15 11:53:52.955673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.606 [2024-11-15 11:53:52.955684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.606 [2024-11-15 11:53:52.955707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.606 qpair failed and we were unable to recover it.
00:30:27.606 [2024-11-15 11:53:52.965539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.606 [2024-11-15 11:53:52.965602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.606 [2024-11-15 11:53:52.965624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.606 [2024-11-15 11:53:52.965636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.606 [2024-11-15 11:53:52.965647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.606 [2024-11-15 11:53:52.965670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.606 qpair failed and we were unable to recover it.
00:30:27.606 [2024-11-15 11:53:52.975531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.606 [2024-11-15 11:53:52.975587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.606 [2024-11-15 11:53:52.975613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.606 [2024-11-15 11:53:52.975625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.606 [2024-11-15 11:53:52.975636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.606 [2024-11-15 11:53:52.975659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.606 qpair failed and we were unable to recover it.
00:30:27.606 [2024-11-15 11:53:52.985538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.606 [2024-11-15 11:53:52.985595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.606 [2024-11-15 11:53:52.985614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.606 [2024-11-15 11:53:52.985626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.606 [2024-11-15 11:53:52.985637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.606 [2024-11-15 11:53:52.985660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.606 qpair failed and we were unable to recover it.
00:30:27.606 [2024-11-15 11:53:52.995612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.606 [2024-11-15 11:53:52.995677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.606 [2024-11-15 11:53:52.995697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.606 [2024-11-15 11:53:52.995709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.606 [2024-11-15 11:53:52.995720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.606 [2024-11-15 11:53:52.995743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.606 qpair failed and we were unable to recover it.
00:30:27.606 [2024-11-15 11:53:53.005671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.606 [2024-11-15 11:53:53.005730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.606 [2024-11-15 11:53:53.005752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.606 [2024-11-15 11:53:53.005764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.606 [2024-11-15 11:53:53.005775] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.606 [2024-11-15 11:53:53.005798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.606 qpair failed and we were unable to recover it.
00:30:27.606 [2024-11-15 11:53:53.015639] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.606 [2024-11-15 11:53:53.015739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.606 [2024-11-15 11:53:53.015756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.606 [2024-11-15 11:53:53.015771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.606 [2024-11-15 11:53:53.015783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.606 [2024-11-15 11:53:53.015806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.606 qpair failed and we were unable to recover it.
00:30:27.606 [2024-11-15 11:53:53.025648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.606 [2024-11-15 11:53:53.025722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.606 [2024-11-15 11:53:53.025739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.606 [2024-11-15 11:53:53.025751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.606 [2024-11-15 11:53:53.025761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.606 [2024-11-15 11:53:53.025784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.606 qpair failed and we were unable to recover it.
00:30:27.606 [2024-11-15 11:53:53.035749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.606 [2024-11-15 11:53:53.035812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.606 [2024-11-15 11:53:53.035834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.606 [2024-11-15 11:53:53.035845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.606 [2024-11-15 11:53:53.035856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.606 [2024-11-15 11:53:53.035879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.606 qpair failed and we were unable to recover it.
00:30:27.606 [2024-11-15 11:53:53.045786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.606 [2024-11-15 11:53:53.045852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.606 [2024-11-15 11:53:53.045871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.606 [2024-11-15 11:53:53.045882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.606 [2024-11-15 11:53:53.045893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.606 [2024-11-15 11:53:53.045916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.606 qpair failed and we were unable to recover it.
00:30:27.606 [2024-11-15 11:53:53.055741] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.606 [2024-11-15 11:53:53.055793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.606 [2024-11-15 11:53:53.055814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.606 [2024-11-15 11:53:53.055826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.606 [2024-11-15 11:53:53.055836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.606 [2024-11-15 11:53:53.055859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.606 qpair failed and we were unable to recover it.
00:30:27.606 [2024-11-15 11:53:53.065751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.606 [2024-11-15 11:53:53.065807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.606 [2024-11-15 11:53:53.065828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.606 [2024-11-15 11:53:53.065839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.606 [2024-11-15 11:53:53.065850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.606 [2024-11-15 11:53:53.065872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.606 qpair failed and we were unable to recover it.
00:30:27.606 [2024-11-15 11:53:53.075843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.606 [2024-11-15 11:53:53.075906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.606 [2024-11-15 11:53:53.075927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.606 [2024-11-15 11:53:53.075938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.606 [2024-11-15 11:53:53.075948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.606 [2024-11-15 11:53:53.075971] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.606 qpair failed and we were unable to recover it.
00:30:27.606 [2024-11-15 11:53:53.085871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.606 [2024-11-15 11:53:53.085932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.606 [2024-11-15 11:53:53.085953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.606 [2024-11-15 11:53:53.085965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.607 [2024-11-15 11:53:53.085976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.607 [2024-11-15 11:53:53.085998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.607 qpair failed and we were unable to recover it.
00:30:27.607 [2024-11-15 11:53:53.095852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.607 [2024-11-15 11:53:53.095932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.607 [2024-11-15 11:53:53.095948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.607 [2024-11-15 11:53:53.095960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.607 [2024-11-15 11:53:53.095971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.607 [2024-11-15 11:53:53.095994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.607 qpair failed and we were unable to recover it.
00:30:27.869 [2024-11-15 11:53:53.105891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.869 [2024-11-15 11:53:53.105947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.869 [2024-11-15 11:53:53.105967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.869 [2024-11-15 11:53:53.105980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.869 [2024-11-15 11:53:53.105991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.869 [2024-11-15 11:53:53.106014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.869 qpair failed and we were unable to recover it.
00:30:27.869 [2024-11-15 11:53:53.116012] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.869 [2024-11-15 11:53:53.116075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.869 [2024-11-15 11:53:53.116095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.869 [2024-11-15 11:53:53.116106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.869 [2024-11-15 11:53:53.116117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.869 [2024-11-15 11:53:53.116140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.869 qpair failed and we were unable to recover it.
00:30:27.869 [2024-11-15 11:53:53.126003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.869 [2024-11-15 11:53:53.126062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.869 [2024-11-15 11:53:53.126083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.869 [2024-11-15 11:53:53.126095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.869 [2024-11-15 11:53:53.126106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.869 [2024-11-15 11:53:53.126129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.869 qpair failed and we were unable to recover it.
00:30:27.869 [2024-11-15 11:53:53.135987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.869 [2024-11-15 11:53:53.136044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.869 [2024-11-15 11:53:53.136065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.869 [2024-11-15 11:53:53.136077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.869 [2024-11-15 11:53:53.136088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.869 [2024-11-15 11:53:53.136110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.869 qpair failed and we were unable to recover it.
00:30:27.869 [2024-11-15 11:53:53.145995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.869 [2024-11-15 11:53:53.146051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.869 [2024-11-15 11:53:53.146073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.869 [2024-11-15 11:53:53.146088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.869 [2024-11-15 11:53:53.146099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.869 [2024-11-15 11:53:53.146122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.869 qpair failed and we were unable to recover it.
00:30:27.869 [2024-11-15 11:53:53.156028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.869 [2024-11-15 11:53:53.156089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.869 [2024-11-15 11:53:53.156110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.869 [2024-11-15 11:53:53.156121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.869 [2024-11-15 11:53:53.156132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.869 [2024-11-15 11:53:53.156155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.869 qpair failed and we were unable to recover it.
00:30:27.869 [2024-11-15 11:53:53.165953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.869 [2024-11-15 11:53:53.166006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.869 [2024-11-15 11:53:53.166026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.869 [2024-11-15 11:53:53.166039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.869 [2024-11-15 11:53:53.166050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.869 [2024-11-15 11:53:53.166073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.869 qpair failed and we were unable to recover it.
00:30:27.869 [2024-11-15 11:53:53.176047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.869 [2024-11-15 11:53:53.176101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.869 [2024-11-15 11:53:53.176121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.869 [2024-11-15 11:53:53.176134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.869 [2024-11-15 11:53:53.176145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.869 [2024-11-15 11:53:53.176168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.869 qpair failed and we were unable to recover it.
00:30:27.870 [2024-11-15 11:53:53.186080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.870 [2024-11-15 11:53:53.186132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.870 [2024-11-15 11:53:53.186154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.870 [2024-11-15 11:53:53.186166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.870 [2024-11-15 11:53:53.186177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.870 [2024-11-15 11:53:53.186204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.870 qpair failed and we were unable to recover it.
00:30:27.870 [2024-11-15 11:53:53.196175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.870 [2024-11-15 11:53:53.196241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.870 [2024-11-15 11:53:53.196260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.870 [2024-11-15 11:53:53.196272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.870 [2024-11-15 11:53:53.196283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.870 [2024-11-15 11:53:53.196307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.870 qpair failed and we were unable to recover it.
00:30:27.870 [2024-11-15 11:53:53.206173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.870 [2024-11-15 11:53:53.206232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.870 [2024-11-15 11:53:53.206260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.870 [2024-11-15 11:53:53.206273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.870 [2024-11-15 11:53:53.206285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.870 [2024-11-15 11:53:53.206313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.870 qpair failed and we were unable to recover it.
00:30:27.870 [2024-11-15 11:53:53.216208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.870 [2024-11-15 11:53:53.216275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.870 [2024-11-15 11:53:53.216305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.870 [2024-11-15 11:53:53.216319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.870 [2024-11-15 11:53:53.216331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.870 [2024-11-15 11:53:53.216359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.870 qpair failed and we were unable to recover it.
00:30:27.870 [2024-11-15 11:53:53.226209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.870 [2024-11-15 11:53:53.226264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.870 [2024-11-15 11:53:53.226293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.870 [2024-11-15 11:53:53.226305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.870 [2024-11-15 11:53:53.226317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.870 [2024-11-15 11:53:53.226345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.870 qpair failed and we were unable to recover it.
00:30:27.870 [2024-11-15 11:53:53.236298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.870 [2024-11-15 11:53:53.236366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.870 [2024-11-15 11:53:53.236396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.870 [2024-11-15 11:53:53.236410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.870 [2024-11-15 11:53:53.236421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.870 [2024-11-15 11:53:53.236449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.870 qpair failed and we were unable to recover it.
00:30:27.870 [2024-11-15 11:53:53.246278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.870 [2024-11-15 11:53:53.246334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.870 [2024-11-15 11:53:53.246357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.870 [2024-11-15 11:53:53.246370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.870 [2024-11-15 11:53:53.246381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.870 [2024-11-15 11:53:53.246406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.870 qpair failed and we were unable to recover it.
00:30:27.870 [2024-11-15 11:53:53.256285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.870 [2024-11-15 11:53:53.256343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.870 [2024-11-15 11:53:53.256363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.870 [2024-11-15 11:53:53.256375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.870 [2024-11-15 11:53:53.256386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.870 [2024-11-15 11:53:53.256409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.870 qpair failed and we were unable to recover it.
00:30:27.870 [2024-11-15 11:53:53.266326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.870 [2024-11-15 11:53:53.266379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.870 [2024-11-15 11:53:53.266401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.870 [2024-11-15 11:53:53.266413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.870 [2024-11-15 11:53:53.266423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.870 [2024-11-15 11:53:53.266447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.870 qpair failed and we were unable to recover it.
00:30:27.870 [2024-11-15 11:53:53.276443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.870 [2024-11-15 11:53:53.276521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.870 [2024-11-15 11:53:53.276544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.870 [2024-11-15 11:53:53.276558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.870 [2024-11-15 11:53:53.276576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.870 [2024-11-15 11:53:53.276601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.870 qpair failed and we were unable to recover it.
00:30:27.870 [2024-11-15 11:53:53.286365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.870 [2024-11-15 11:53:53.286425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.870 [2024-11-15 11:53:53.286445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.870 [2024-11-15 11:53:53.286458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.870 [2024-11-15 11:53:53.286469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.870 [2024-11-15 11:53:53.286493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.870 qpair failed and we were unable to recover it.
00:30:27.870 [2024-11-15 11:53:53.296388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.870 [2024-11-15 11:53:53.296443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.870 [2024-11-15 11:53:53.296466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.870 [2024-11-15 11:53:53.296478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.870 [2024-11-15 11:53:53.296489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.870 [2024-11-15 11:53:53.296512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.870 qpair failed and we were unable to recover it.
00:30:27.870 [2024-11-15 11:53:53.306417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.870 [2024-11-15 11:53:53.306472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.870 [2024-11-15 11:53:53.306494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.870 [2024-11-15 11:53:53.306506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.870 [2024-11-15 11:53:53.306517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:27.870 [2024-11-15 11:53:53.306541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:27.870 qpair failed and we were unable to recover it.
00:30:27.870 [2024-11-15 11:53:53.316490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.871 [2024-11-15 11:53:53.316552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.871 [2024-11-15 11:53:53.316577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.871 [2024-11-15 11:53:53.316590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.871 [2024-11-15 11:53:53.316605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:27.871 [2024-11-15 11:53:53.316630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.871 qpair failed and we were unable to recover it. 00:30:27.871 [2024-11-15 11:53:53.326481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.871 [2024-11-15 11:53:53.326536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.871 [2024-11-15 11:53:53.326559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.871 [2024-11-15 11:53:53.326576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.871 [2024-11-15 11:53:53.326588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:27.871 [2024-11-15 11:53:53.326611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.871 qpair failed and we were unable to recover it. 00:30:27.871 [2024-11-15 11:53:53.336483] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.871 [2024-11-15 11:53:53.336536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.871 [2024-11-15 11:53:53.336559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.871 [2024-11-15 11:53:53.336577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.871 [2024-11-15 11:53:53.336588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:27.871 [2024-11-15 11:53:53.336612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.871 qpair failed and we were unable to recover it. 
00:30:27.871 [2024-11-15 11:53:53.346543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.871 [2024-11-15 11:53:53.346661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.871 [2024-11-15 11:53:53.346677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.871 [2024-11-15 11:53:53.346688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.871 [2024-11-15 11:53:53.346699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:27.871 [2024-11-15 11:53:53.346723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.871 qpair failed and we were unable to recover it. 00:30:27.871 [2024-11-15 11:53:53.356616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.871 [2024-11-15 11:53:53.356676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.871 [2024-11-15 11:53:53.356698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.871 [2024-11-15 11:53:53.356709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.871 [2024-11-15 11:53:53.356720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:27.871 [2024-11-15 11:53:53.356743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:27.871 qpair failed and we were unable to recover it. 00:30:28.133 [2024-11-15 11:53:53.366597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.133 [2024-11-15 11:53:53.366656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.133 [2024-11-15 11:53:53.366677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.133 [2024-11-15 11:53:53.366689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.133 [2024-11-15 11:53:53.366700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.133 [2024-11-15 11:53:53.366723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.133 qpair failed and we were unable to recover it. 
00:30:28.133 [2024-11-15 11:53:53.376592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.133 [2024-11-15 11:53:53.376647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.133 [2024-11-15 11:53:53.376668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.133 [2024-11-15 11:53:53.376680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.133 [2024-11-15 11:53:53.376691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.134 [2024-11-15 11:53:53.376715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.134 qpair failed and we were unable to recover it. 00:30:28.134 [2024-11-15 11:53:53.386680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.134 [2024-11-15 11:53:53.386740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.134 [2024-11-15 11:53:53.386760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.134 [2024-11-15 11:53:53.386772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.134 [2024-11-15 11:53:53.386783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.134 [2024-11-15 11:53:53.386806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.134 qpair failed and we were unable to recover it. 00:30:28.134 [2024-11-15 11:53:53.396732] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.134 [2024-11-15 11:53:53.396803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.134 [2024-11-15 11:53:53.396821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.134 [2024-11-15 11:53:53.396833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.134 [2024-11-15 11:53:53.396844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.134 [2024-11-15 11:53:53.396868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.134 qpair failed and we were unable to recover it. 
00:30:28.134 [2024-11-15 11:53:53.406736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.134 [2024-11-15 11:53:53.406792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.134 [2024-11-15 11:53:53.406816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.134 [2024-11-15 11:53:53.406828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.134 [2024-11-15 11:53:53.406839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.134 [2024-11-15 11:53:53.406862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.134 qpair failed and we were unable to recover it. 00:30:28.134 [2024-11-15 11:53:53.416718] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.134 [2024-11-15 11:53:53.416780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.134 [2024-11-15 11:53:53.416799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.134 [2024-11-15 11:53:53.416811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.134 [2024-11-15 11:53:53.416822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.134 [2024-11-15 11:53:53.416844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.134 qpair failed and we were unable to recover it. 00:30:28.134 [2024-11-15 11:53:53.426751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.134 [2024-11-15 11:53:53.426804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.134 [2024-11-15 11:53:53.426826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.134 [2024-11-15 11:53:53.426838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.134 [2024-11-15 11:53:53.426849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.134 [2024-11-15 11:53:53.426873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.134 qpair failed and we were unable to recover it. 
00:30:28.134 [2024-11-15 11:53:53.436708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.134 [2024-11-15 11:53:53.436770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.134 [2024-11-15 11:53:53.436791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.134 [2024-11-15 11:53:53.436802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.134 [2024-11-15 11:53:53.436813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.134 [2024-11-15 11:53:53.436836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.134 qpair failed and we were unable to recover it. 00:30:28.134 [2024-11-15 11:53:53.446794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.134 [2024-11-15 11:53:53.446853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.134 [2024-11-15 11:53:53.446874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.134 [2024-11-15 11:53:53.446886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.134 [2024-11-15 11:53:53.446901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.134 [2024-11-15 11:53:53.446924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.134 qpair failed and we were unable to recover it. 00:30:28.134 [2024-11-15 11:53:53.456829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.134 [2024-11-15 11:53:53.456889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.134 [2024-11-15 11:53:53.456909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.134 [2024-11-15 11:53:53.456920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.134 [2024-11-15 11:53:53.456931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.134 [2024-11-15 11:53:53.456954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.134 qpair failed and we were unable to recover it. 
00:30:28.134 [2024-11-15 11:53:53.466841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.134 [2024-11-15 11:53:53.466891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.134 [2024-11-15 11:53:53.466910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.134 [2024-11-15 11:53:53.466922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.134 [2024-11-15 11:53:53.466934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.134 [2024-11-15 11:53:53.466956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.134 qpair failed and we were unable to recover it. 00:30:28.134 [2024-11-15 11:53:53.476922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.134 [2024-11-15 11:53:53.476981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.134 [2024-11-15 11:53:53.477001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.134 [2024-11-15 11:53:53.477012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.134 [2024-11-15 11:53:53.477023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.134 [2024-11-15 11:53:53.477046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.134 qpair failed and we were unable to recover it. 00:30:28.134 [2024-11-15 11:53:53.487025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.134 [2024-11-15 11:53:53.487093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.134 [2024-11-15 11:53:53.487113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.134 [2024-11-15 11:53:53.487125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.134 [2024-11-15 11:53:53.487136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.134 [2024-11-15 11:53:53.487158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.134 qpair failed and we were unable to recover it. 
00:30:28.134 [2024-11-15 11:53:53.496990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.134 [2024-11-15 11:53:53.497051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.134 [2024-11-15 11:53:53.497071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.134 [2024-11-15 11:53:53.497082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.134 [2024-11-15 11:53:53.497093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.134 [2024-11-15 11:53:53.497116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.135 qpair failed and we were unable to recover it. 00:30:28.135 [2024-11-15 11:53:53.506992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.135 [2024-11-15 11:53:53.507047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.135 [2024-11-15 11:53:53.507067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.135 [2024-11-15 11:53:53.507078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.135 [2024-11-15 11:53:53.507089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.135 [2024-11-15 11:53:53.507112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.135 qpair failed and we were unable to recover it. 00:30:28.135 [2024-11-15 11:53:53.517001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.135 [2024-11-15 11:53:53.517052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.135 [2024-11-15 11:53:53.517071] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.135 [2024-11-15 11:53:53.517083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.135 [2024-11-15 11:53:53.517093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.135 [2024-11-15 11:53:53.517116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.135 qpair failed and we were unable to recover it. 
00:30:28.135 [2024-11-15 11:53:53.527037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.135 [2024-11-15 11:53:53.527089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.135 [2024-11-15 11:53:53.527108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.135 [2024-11-15 11:53:53.527120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.135 [2024-11-15 11:53:53.527130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.135 [2024-11-15 11:53:53.527154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.135 qpair failed and we were unable to recover it. 00:30:28.135 [2024-11-15 11:53:53.537009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.135 [2024-11-15 11:53:53.537064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.135 [2024-11-15 11:53:53.537089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.135 [2024-11-15 11:53:53.537101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.135 [2024-11-15 11:53:53.537112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.135 [2024-11-15 11:53:53.537134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.135 qpair failed and we were unable to recover it. 00:30:28.135 [2024-11-15 11:53:53.547085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.135 [2024-11-15 11:53:53.547136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.135 [2024-11-15 11:53:53.547159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.135 [2024-11-15 11:53:53.547171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.135 [2024-11-15 11:53:53.547182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.135 [2024-11-15 11:53:53.547206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.135 qpair failed and we were unable to recover it. 
00:30:28.135 [2024-11-15 11:53:53.557111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.135 [2024-11-15 11:53:53.557162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.135 [2024-11-15 11:53:53.557182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.135 [2024-11-15 11:53:53.557194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.135 [2024-11-15 11:53:53.557205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.135 [2024-11-15 11:53:53.557228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.135 qpair failed and we were unable to recover it. 00:30:28.135 [2024-11-15 11:53:53.567168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.135 [2024-11-15 11:53:53.567231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.135 [2024-11-15 11:53:53.567251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.135 [2024-11-15 11:53:53.567263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.135 [2024-11-15 11:53:53.567274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.135 [2024-11-15 11:53:53.567297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.135 qpair failed and we were unable to recover it. 00:30:28.135 [2024-11-15 11:53:53.577158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.135 [2024-11-15 11:53:53.577206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.135 [2024-11-15 11:53:53.577226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.135 [2024-11-15 11:53:53.577244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.135 [2024-11-15 11:53:53.577255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.135 [2024-11-15 11:53:53.577278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.135 qpair failed and we were unable to recover it. 
00:30:28.135 [2024-11-15 11:53:53.587184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.135 [2024-11-15 11:53:53.587248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.135 [2024-11-15 11:53:53.587278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.135 [2024-11-15 11:53:53.587293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.135 [2024-11-15 11:53:53.587305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.135 [2024-11-15 11:53:53.587333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.135 qpair failed and we were unable to recover it. 00:30:28.135 [2024-11-15 11:53:53.597228] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.135 [2024-11-15 11:53:53.597289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.135 [2024-11-15 11:53:53.597319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.135 [2024-11-15 11:53:53.597333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.135 [2024-11-15 11:53:53.597344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.135 [2024-11-15 11:53:53.597375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.135 qpair failed and we were unable to recover it. 00:30:28.135 [2024-11-15 11:53:53.607258] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.135 [2024-11-15 11:53:53.607320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.135 [2024-11-15 11:53:53.607349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.135 [2024-11-15 11:53:53.607363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.135 [2024-11-15 11:53:53.607375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.135 [2024-11-15 11:53:53.607404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.135 qpair failed and we were unable to recover it. 
00:30:28.135 [2024-11-15 11:53:53.617270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.135 [2024-11-15 11:53:53.617322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.135 [2024-11-15 11:53:53.617343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.135 [2024-11-15 11:53:53.617355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.135 [2024-11-15 11:53:53.617367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.135 [2024-11-15 11:53:53.617392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.135 qpair failed and we were unable to recover it. 00:30:28.135 [2024-11-15 11:53:53.627349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.135 [2024-11-15 11:53:53.627434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.135 [2024-11-15 11:53:53.627451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.135 [2024-11-15 11:53:53.627462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.135 [2024-11-15 11:53:53.627473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.135 [2024-11-15 11:53:53.627496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.135 qpair failed and we were unable to recover it. 00:30:28.398 [2024-11-15 11:53:53.637286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.398 [2024-11-15 11:53:53.637339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.398 [2024-11-15 11:53:53.637360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.398 [2024-11-15 11:53:53.637373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.398 [2024-11-15 11:53:53.637385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.398 [2024-11-15 11:53:53.637410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.398 qpair failed and we were unable to recover it. 
00:30:28.398 [2024-11-15 11:53:53.647370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.398 [2024-11-15 11:53:53.647440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.398 [2024-11-15 11:53:53.647459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.398 [2024-11-15 11:53:53.647471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.398 [2024-11-15 11:53:53.647483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.398 [2024-11-15 11:53:53.647506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.398 qpair failed and we were unable to recover it. 00:30:28.398 [2024-11-15 11:53:53.657335] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.398 [2024-11-15 11:53:53.657388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.398 [2024-11-15 11:53:53.657410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.398 [2024-11-15 11:53:53.657422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.398 [2024-11-15 11:53:53.657433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.398 [2024-11-15 11:53:53.657456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.398 qpair failed and we were unable to recover it. 00:30:28.398 [2024-11-15 11:53:53.667411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.398 [2024-11-15 11:53:53.667465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.398 [2024-11-15 11:53:53.667487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.398 [2024-11-15 11:53:53.667499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.398 [2024-11-15 11:53:53.667510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.398 [2024-11-15 11:53:53.667533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.398 qpair failed and we were unable to recover it. 
00:30:28.398 [2024-11-15 11:53:53.677341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.398 [2024-11-15 11:53:53.677402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.398 [2024-11-15 11:53:53.677420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.398 [2024-11-15 11:53:53.677432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.398 [2024-11-15 11:53:53.677443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.398 [2024-11-15 11:53:53.677466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.398 qpair failed and we were unable to recover it. 00:30:28.398 [2024-11-15 11:53:53.687481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.398 [2024-11-15 11:53:53.687536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.398 [2024-11-15 11:53:53.687557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.398 [2024-11-15 11:53:53.687574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.398 [2024-11-15 11:53:53.687586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.398 [2024-11-15 11:53:53.687609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.398 qpair failed and we were unable to recover it. 00:30:28.398 [2024-11-15 11:53:53.697486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.398 [2024-11-15 11:53:53.697541] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.398 [2024-11-15 11:53:53.697567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.398 [2024-11-15 11:53:53.697580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.399 [2024-11-15 11:53:53.697591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.399 [2024-11-15 11:53:53.697615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.399 qpair failed and we were unable to recover it. 
00:30:28.399 [2024-11-15 11:53:53.707507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.399 [2024-11-15 11:53:53.707558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.399 [2024-11-15 11:53:53.707584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.399 [2024-11-15 11:53:53.707600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.399 [2024-11-15 11:53:53.707612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.399 [2024-11-15 11:53:53.707635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.399 qpair failed and we were unable to recover it. 00:30:28.399 [2024-11-15 11:53:53.717535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.399 [2024-11-15 11:53:53.717625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.399 [2024-11-15 11:53:53.717641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.399 [2024-11-15 11:53:53.717652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.399 [2024-11-15 11:53:53.717663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.399 [2024-11-15 11:53:53.717686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.399 qpair failed and we were unable to recover it. 00:30:28.399 [2024-11-15 11:53:53.727582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.399 [2024-11-15 11:53:53.727642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.399 [2024-11-15 11:53:53.727662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.399 [2024-11-15 11:53:53.727673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.399 [2024-11-15 11:53:53.727684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.399 [2024-11-15 11:53:53.727707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.399 qpair failed and we were unable to recover it. 
00:30:28.399 [2024-11-15 11:53:53.737599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.399 [2024-11-15 11:53:53.737657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.399 [2024-11-15 11:53:53.737677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.399 [2024-11-15 11:53:53.737689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.399 [2024-11-15 11:53:53.737700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.399 [2024-11-15 11:53:53.737723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.399 qpair failed and we were unable to recover it. 00:30:28.399 [2024-11-15 11:53:53.747605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.399 [2024-11-15 11:53:53.747654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.399 [2024-11-15 11:53:53.747674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.399 [2024-11-15 11:53:53.747686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.399 [2024-11-15 11:53:53.747698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.399 [2024-11-15 11:53:53.747725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.399 qpair failed and we were unable to recover it. 00:30:28.399 [2024-11-15 11:53:53.757660] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.399 [2024-11-15 11:53:53.757724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.399 [2024-11-15 11:53:53.757744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.399 [2024-11-15 11:53:53.757755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.399 [2024-11-15 11:53:53.757766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.399 [2024-11-15 11:53:53.757789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.399 qpair failed and we were unable to recover it. 
00:30:28.399 [2024-11-15 11:53:53.767697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.399 [2024-11-15 11:53:53.767753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.399 [2024-11-15 11:53:53.767772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.399 [2024-11-15 11:53:53.767783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.399 [2024-11-15 11:53:53.767794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.399 [2024-11-15 11:53:53.767817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.399 qpair failed and we were unable to recover it. 00:30:28.399 [2024-11-15 11:53:53.777698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.399 [2024-11-15 11:53:53.777754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.399 [2024-11-15 11:53:53.777776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.399 [2024-11-15 11:53:53.777790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.399 [2024-11-15 11:53:53.777801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.399 [2024-11-15 11:53:53.777824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.399 qpair failed and we were unable to recover it. 00:30:28.399 [2024-11-15 11:53:53.787742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.399 [2024-11-15 11:53:53.787793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.399 [2024-11-15 11:53:53.787811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.399 [2024-11-15 11:53:53.787822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.399 [2024-11-15 11:53:53.787834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.399 [2024-11-15 11:53:53.787856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.399 qpair failed and we were unable to recover it. 
00:30:28.399 [2024-11-15 11:53:53.797767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.399 [2024-11-15 11:53:53.797823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.399 [2024-11-15 11:53:53.797844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.399 [2024-11-15 11:53:53.797855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.399 [2024-11-15 11:53:53.797866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.399 [2024-11-15 11:53:53.797889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.399 qpair failed and we were unable to recover it. 00:30:28.399 [2024-11-15 11:53:53.807803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.399 [2024-11-15 11:53:53.807859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.399 [2024-11-15 11:53:53.807881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.399 [2024-11-15 11:53:53.807893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.399 [2024-11-15 11:53:53.807904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.399 [2024-11-15 11:53:53.807926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.399 qpair failed and we were unable to recover it. 00:30:28.399 [2024-11-15 11:53:53.817823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.399 [2024-11-15 11:53:53.817872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.399 [2024-11-15 11:53:53.817894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.399 [2024-11-15 11:53:53.817906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.399 [2024-11-15 11:53:53.817917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.399 [2024-11-15 11:53:53.817940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.399 qpair failed and we were unable to recover it. 
00:30:28.399 [2024-11-15 11:53:53.827821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.399 [2024-11-15 11:53:53.827895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.399 [2024-11-15 11:53:53.827911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.399 [2024-11-15 11:53:53.827922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.399 [2024-11-15 11:53:53.827934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.400 [2024-11-15 11:53:53.827956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.400 qpair failed and we were unable to recover it. 00:30:28.400 [2024-11-15 11:53:53.837889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.400 [2024-11-15 11:53:53.837943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.400 [2024-11-15 11:53:53.837968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.400 [2024-11-15 11:53:53.837980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.400 [2024-11-15 11:53:53.837992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.400 [2024-11-15 11:53:53.838014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.400 qpair failed and we were unable to recover it. 00:30:28.400 [2024-11-15 11:53:53.847880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.400 [2024-11-15 11:53:53.847963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.400 [2024-11-15 11:53:53.847981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.400 [2024-11-15 11:53:53.847993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.400 [2024-11-15 11:53:53.848004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.400 [2024-11-15 11:53:53.848027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.400 qpair failed and we were unable to recover it. 
00:30:28.400 [2024-11-15 11:53:53.857913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.400 [2024-11-15 11:53:53.857971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.400 [2024-11-15 11:53:53.857990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.400 [2024-11-15 11:53:53.858001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.400 [2024-11-15 11:53:53.858012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.400 [2024-11-15 11:53:53.858035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.400 qpair failed and we were unable to recover it. 00:30:28.400 [2024-11-15 11:53:53.867933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.400 [2024-11-15 11:53:53.868014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.400 [2024-11-15 11:53:53.868030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.400 [2024-11-15 11:53:53.868041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.400 [2024-11-15 11:53:53.868051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.400 [2024-11-15 11:53:53.868075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.400 qpair failed and we were unable to recover it. 00:30:28.400 [2024-11-15 11:53:53.877991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.400 [2024-11-15 11:53:53.878046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.400 [2024-11-15 11:53:53.878067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.400 [2024-11-15 11:53:53.878078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.400 [2024-11-15 11:53:53.878094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.400 [2024-11-15 11:53:53.878116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.400 qpair failed and we were unable to recover it. 
00:30:28.400 [2024-11-15 11:53:53.888027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.400 [2024-11-15 11:53:53.888116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.400 [2024-11-15 11:53:53.888132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.400 [2024-11-15 11:53:53.888143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.400 [2024-11-15 11:53:53.888154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.400 [2024-11-15 11:53:53.888177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.400 qpair failed and we were unable to recover it. 00:30:28.662 [2024-11-15 11:53:53.898033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.662 [2024-11-15 11:53:53.898085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.662 [2024-11-15 11:53:53.898107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.662 [2024-11-15 11:53:53.898119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.662 [2024-11-15 11:53:53.898130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.662 [2024-11-15 11:53:53.898153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.662 qpair failed and we were unable to recover it. 00:30:28.662 [2024-11-15 11:53:53.908064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.662 [2024-11-15 11:53:53.908111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.662 [2024-11-15 11:53:53.908131] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.662 [2024-11-15 11:53:53.908143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.662 [2024-11-15 11:53:53.908155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.662 [2024-11-15 11:53:53.908177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.662 qpair failed and we were unable to recover it. 
00:30:28.662 [2024-11-15 11:53:53.918099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.662 [2024-11-15 11:53:53.918155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.662 [2024-11-15 11:53:53.918177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.662 [2024-11-15 11:53:53.918189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.662 [2024-11-15 11:53:53.918200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.662 [2024-11-15 11:53:53.918223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.662 qpair failed and we were unable to recover it. 00:30:28.662 [2024-11-15 11:53:53.928125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.662 [2024-11-15 11:53:53.928182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.662 [2024-11-15 11:53:53.928203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.662 [2024-11-15 11:53:53.928215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.662 [2024-11-15 11:53:53.928226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.662 [2024-11-15 11:53:53.928250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.662 qpair failed and we were unable to recover it. 00:30:28.662 [2024-11-15 11:53:53.938134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.662 [2024-11-15 11:53:53.938187] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.662 [2024-11-15 11:53:53.938208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.662 [2024-11-15 11:53:53.938220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.662 [2024-11-15 11:53:53.938231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.662 [2024-11-15 11:53:53.938255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.662 qpair failed and we were unable to recover it. 
00:30:28.662 [2024-11-15 11:53:53.948147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.662 [2024-11-15 11:53:53.948200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.662 [2024-11-15 11:53:53.948221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.662 [2024-11-15 11:53:53.948233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.662 [2024-11-15 11:53:53.948244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.662 [2024-11-15 11:53:53.948266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.662 qpair failed and we were unable to recover it. 00:30:28.662 [2024-11-15 11:53:53.958214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.663 [2024-11-15 11:53:53.958267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.663 [2024-11-15 11:53:53.958288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.663 [2024-11-15 11:53:53.958299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.663 [2024-11-15 11:53:53.958310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.663 [2024-11-15 11:53:53.958334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.663 qpair failed and we were unable to recover it. 00:30:28.663 [2024-11-15 11:53:53.968245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.663 [2024-11-15 11:53:53.968303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.663 [2024-11-15 11:53:53.968328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.663 [2024-11-15 11:53:53.968341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.663 [2024-11-15 11:53:53.968352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.663 [2024-11-15 11:53:53.968376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.663 qpair failed and we were unable to recover it. 
00:30:28.663 [2024-11-15 11:53:53.978245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.663 [2024-11-15 11:53:53.978293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.663 [2024-11-15 11:53:53.978313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.663 [2024-11-15 11:53:53.978325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.663 [2024-11-15 11:53:53.978336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.663 [2024-11-15 11:53:53.978359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.663 qpair failed and we were unable to recover it. 00:30:28.663 [2024-11-15 11:53:53.988257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.663 [2024-11-15 11:53:53.988307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.663 [2024-11-15 11:53:53.988329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.663 [2024-11-15 11:53:53.988340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.663 [2024-11-15 11:53:53.988352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.663 [2024-11-15 11:53:53.988375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.663 qpair failed and we were unable to recover it. 00:30:28.663 [2024-11-15 11:53:53.998272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.663 [2024-11-15 11:53:53.998324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.663 [2024-11-15 11:53:53.998345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.663 [2024-11-15 11:53:53.998357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.663 [2024-11-15 11:53:53.998368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.663 [2024-11-15 11:53:53.998391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.663 qpair failed and we were unable to recover it. 
00:30:28.663 [2024-11-15 11:53:54.008338] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.663 [2024-11-15 11:53:54.008393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.663 [2024-11-15 11:53:54.008414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.663 [2024-11-15 11:53:54.008426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.663 [2024-11-15 11:53:54.008441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.663 [2024-11-15 11:53:54.008465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.663 qpair failed and we were unable to recover it. 00:30:28.663 [2024-11-15 11:53:54.018235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.663 [2024-11-15 11:53:54.018282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.663 [2024-11-15 11:53:54.018302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.663 [2024-11-15 11:53:54.018313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.663 [2024-11-15 11:53:54.018323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.663 [2024-11-15 11:53:54.018347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.663 qpair failed and we were unable to recover it. 00:30:28.663 [2024-11-15 11:53:54.028391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.663 [2024-11-15 11:53:54.028444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.663 [2024-11-15 11:53:54.028464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.663 [2024-11-15 11:53:54.028476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.663 [2024-11-15 11:53:54.028488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.663 [2024-11-15 11:53:54.028512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.663 qpair failed and we were unable to recover it. 
00:30:28.663 [2024-11-15 11:53:54.038433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.663 [2024-11-15 11:53:54.038485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.663 [2024-11-15 11:53:54.038505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.663 [2024-11-15 11:53:54.038517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.663 [2024-11-15 11:53:54.038528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.663 [2024-11-15 11:53:54.038551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.663 qpair failed and we were unable to recover it. 00:30:28.663 [2024-11-15 11:53:54.048464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.663 [2024-11-15 11:53:54.048521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.663 [2024-11-15 11:53:54.048543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.663 [2024-11-15 11:53:54.048554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.663 [2024-11-15 11:53:54.048572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.663 [2024-11-15 11:53:54.048596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.663 qpair failed and we were unable to recover it. 00:30:28.663 [2024-11-15 11:53:54.058346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.663 [2024-11-15 11:53:54.058398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.663 [2024-11-15 11:53:54.058421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.663 [2024-11-15 11:53:54.058432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.663 [2024-11-15 11:53:54.058443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.663 [2024-11-15 11:53:54.058468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.663 qpair failed and we were unable to recover it. 
00:30:28.663 [2024-11-15 11:53:54.068503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.663 [2024-11-15 11:53:54.068554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.663 [2024-11-15 11:53:54.068580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.663 [2024-11-15 11:53:54.068593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.663 [2024-11-15 11:53:54.068603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.663 [2024-11-15 11:53:54.068627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.663 qpair failed and we were unable to recover it. 00:30:28.663 [2024-11-15 11:53:54.078498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.663 [2024-11-15 11:53:54.078580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.663 [2024-11-15 11:53:54.078596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.663 [2024-11-15 11:53:54.078608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.663 [2024-11-15 11:53:54.078619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.663 [2024-11-15 11:53:54.078642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.663 qpair failed and we were unable to recover it. 00:30:28.663 [2024-11-15 11:53:54.088559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.664 [2024-11-15 11:53:54.088612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.664 [2024-11-15 11:53:54.088633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.664 [2024-11-15 11:53:54.088645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.664 [2024-11-15 11:53:54.088656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.664 [2024-11-15 11:53:54.088679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.664 qpair failed and we were unable to recover it. 
00:30:28.664 [2024-11-15 11:53:54.098455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.664 [2024-11-15 11:53:54.098502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.664 [2024-11-15 11:53:54.098527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.664 [2024-11-15 11:53:54.098540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.664 [2024-11-15 11:53:54.098551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.664 [2024-11-15 11:53:54.098580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.664 qpair failed and we were unable to recover it. 00:30:28.664 [2024-11-15 11:53:54.108581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.664 [2024-11-15 11:53:54.108629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.664 [2024-11-15 11:53:54.108649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.664 [2024-11-15 11:53:54.108661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.664 [2024-11-15 11:53:54.108672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.664 [2024-11-15 11:53:54.108696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.664 qpair failed and we were unable to recover it. 00:30:28.664 [2024-11-15 11:53:54.118656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.664 [2024-11-15 11:53:54.118709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.664 [2024-11-15 11:53:54.118730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.664 [2024-11-15 11:53:54.118742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.664 [2024-11-15 11:53:54.118753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.664 [2024-11-15 11:53:54.118776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.664 qpair failed and we were unable to recover it. 
00:30:28.664 [2024-11-15 11:53:54.128680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.664 [2024-11-15 11:53:54.128734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.664 [2024-11-15 11:53:54.128756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.664 [2024-11-15 11:53:54.128767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.664 [2024-11-15 11:53:54.128779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.664 [2024-11-15 11:53:54.128801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.664 qpair failed and we were unable to recover it. 00:30:28.664 [2024-11-15 11:53:54.138686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.664 [2024-11-15 11:53:54.138733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.664 [2024-11-15 11:53:54.138754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.664 [2024-11-15 11:53:54.138770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.664 [2024-11-15 11:53:54.138781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.664 [2024-11-15 11:53:54.138805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.664 qpair failed and we were unable to recover it. 00:30:28.664 [2024-11-15 11:53:54.148720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.664 [2024-11-15 11:53:54.148776] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.664 [2024-11-15 11:53:54.148796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.664 [2024-11-15 11:53:54.148808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.664 [2024-11-15 11:53:54.148819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.664 [2024-11-15 11:53:54.148841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.664 qpair failed and we were unable to recover it. 
00:30:28.926 [2024-11-15 11:53:54.158739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.926 [2024-11-15 11:53:54.158791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.926 [2024-11-15 11:53:54.158812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.926 [2024-11-15 11:53:54.158824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.926 [2024-11-15 11:53:54.158835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.926 [2024-11-15 11:53:54.158857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.926 qpair failed and we were unable to recover it. 00:30:28.926 [2024-11-15 11:53:54.168789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.926 [2024-11-15 11:53:54.168865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.926 [2024-11-15 11:53:54.168881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.926 [2024-11-15 11:53:54.168893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.926 [2024-11-15 11:53:54.168904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.926 [2024-11-15 11:53:54.168927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.926 qpair failed and we were unable to recover it. 00:30:28.926 [2024-11-15 11:53:54.178766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.926 [2024-11-15 11:53:54.178815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.926 [2024-11-15 11:53:54.178838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.926 [2024-11-15 11:53:54.178850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.926 [2024-11-15 11:53:54.178861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.926 [2024-11-15 11:53:54.178888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.926 qpair failed and we were unable to recover it. 
00:30:28.926 [2024-11-15 11:53:54.188783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.926 [2024-11-15 11:53:54.188828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.926 [2024-11-15 11:53:54.188848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.926 [2024-11-15 11:53:54.188860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.926 [2024-11-15 11:53:54.188872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.926 [2024-11-15 11:53:54.188894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.926 qpair failed and we were unable to recover it. 00:30:28.926 [2024-11-15 11:53:54.198832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.926 [2024-11-15 11:53:54.198894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.926 [2024-11-15 11:53:54.198913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.926 [2024-11-15 11:53:54.198924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.926 [2024-11-15 11:53:54.198935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.926 [2024-11-15 11:53:54.198958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.926 qpair failed and we were unable to recover it. 00:30:28.926 [2024-11-15 11:53:54.208883] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.926 [2024-11-15 11:53:54.208961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.926 [2024-11-15 11:53:54.208977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.926 [2024-11-15 11:53:54.208988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.926 [2024-11-15 11:53:54.208999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.926 [2024-11-15 11:53:54.209022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.926 qpair failed and we were unable to recover it. 
00:30:28.926 [2024-11-15 11:53:54.218949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.926 [2024-11-15 11:53:54.218998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.926 [2024-11-15 11:53:54.219018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.926 [2024-11-15 11:53:54.219030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.926 [2024-11-15 11:53:54.219042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.926 [2024-11-15 11:53:54.219064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.926 qpair failed and we were unable to recover it. 00:30:28.926 [2024-11-15 11:53:54.228922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.926 [2024-11-15 11:53:54.228980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.926 [2024-11-15 11:53:54.229001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.926 [2024-11-15 11:53:54.229012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.926 [2024-11-15 11:53:54.229023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.926 [2024-11-15 11:53:54.229046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.926 qpair failed and we were unable to recover it. 00:30:28.926 [2024-11-15 11:53:54.238968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.926 [2024-11-15 11:53:54.239057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.926 [2024-11-15 11:53:54.239073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.926 [2024-11-15 11:53:54.239084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.926 [2024-11-15 11:53:54.239095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.927 [2024-11-15 11:53:54.239118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.927 qpair failed and we were unable to recover it. 
00:30:28.927 [2024-11-15 11:53:54.248992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.927 [2024-11-15 11:53:54.249043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.927 [2024-11-15 11:53:54.249063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.927 [2024-11-15 11:53:54.249075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.927 [2024-11-15 11:53:54.249086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.927 [2024-11-15 11:53:54.249109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.927 qpair failed and we were unable to recover it. 00:30:28.927 [2024-11-15 11:53:54.259014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.927 [2024-11-15 11:53:54.259069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.927 [2024-11-15 11:53:54.259090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.927 [2024-11-15 11:53:54.259101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.927 [2024-11-15 11:53:54.259111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.927 [2024-11-15 11:53:54.259134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.927 qpair failed and we were unable to recover it. 00:30:28.927 [2024-11-15 11:53:54.269048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.927 [2024-11-15 11:53:54.269099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.927 [2024-11-15 11:53:54.269120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.927 [2024-11-15 11:53:54.269136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.927 [2024-11-15 11:53:54.269147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.927 [2024-11-15 11:53:54.269171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.927 qpair failed and we were unable to recover it. 
00:30:28.927 [2024-11-15 11:53:54.279079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.927 [2024-11-15 11:53:54.279134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.927 [2024-11-15 11:53:54.279156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.927 [2024-11-15 11:53:54.279169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.927 [2024-11-15 11:53:54.279179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.927 [2024-11-15 11:53:54.279202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.927 qpair failed and we were unable to recover it. 00:30:28.927 [2024-11-15 11:53:54.289086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.927 [2024-11-15 11:53:54.289144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.927 [2024-11-15 11:53:54.289166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.927 [2024-11-15 11:53:54.289178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.927 [2024-11-15 11:53:54.289189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.927 [2024-11-15 11:53:54.289212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.927 qpair failed and we were unable to recover it. 00:30:28.927 [2024-11-15 11:53:54.299110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.927 [2024-11-15 11:53:54.299174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.927 [2024-11-15 11:53:54.299192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.927 [2024-11-15 11:53:54.299204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.927 [2024-11-15 11:53:54.299214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.927 [2024-11-15 11:53:54.299237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.927 qpair failed and we were unable to recover it. 
00:30:28.927 [2024-11-15 11:53:54.309155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.927 [2024-11-15 11:53:54.309207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.927 [2024-11-15 11:53:54.309230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.927 [2024-11-15 11:53:54.309242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.927 [2024-11-15 11:53:54.309255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.927 [2024-11-15 11:53:54.309282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.927 qpair failed and we were unable to recover it. 00:30:28.927 [2024-11-15 11:53:54.319179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.927 [2024-11-15 11:53:54.319230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.927 [2024-11-15 11:53:54.319250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.927 [2024-11-15 11:53:54.319262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.927 [2024-11-15 11:53:54.319273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.927 [2024-11-15 11:53:54.319295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.927 qpair failed and we were unable to recover it. 00:30:28.927 [2024-11-15 11:53:54.329215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.927 [2024-11-15 11:53:54.329315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.927 [2024-11-15 11:53:54.329341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.927 [2024-11-15 11:53:54.329355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.927 [2024-11-15 11:53:54.329367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.927 [2024-11-15 11:53:54.329396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.927 qpair failed and we were unable to recover it. 
00:30:28.927 [2024-11-15 11:53:54.339219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.927 [2024-11-15 11:53:54.339276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.927 [2024-11-15 11:53:54.339305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.927 [2024-11-15 11:53:54.339318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.927 [2024-11-15 11:53:54.339330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.927 [2024-11-15 11:53:54.339357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.927 qpair failed and we were unable to recover it. 00:30:28.927 [2024-11-15 11:53:54.349132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.927 [2024-11-15 11:53:54.349188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.927 [2024-11-15 11:53:54.349214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.927 [2024-11-15 11:53:54.349226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.927 [2024-11-15 11:53:54.349238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.927 [2024-11-15 11:53:54.349262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.927 qpair failed and we were unable to recover it. 00:30:28.927 [2024-11-15 11:53:54.359303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.927 [2024-11-15 11:53:54.359393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.927 [2024-11-15 11:53:54.359411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.927 [2024-11-15 11:53:54.359422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.927 [2024-11-15 11:53:54.359433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.927 [2024-11-15 11:53:54.359456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.927 qpair failed and we were unable to recover it. 
00:30:28.927 [2024-11-15 11:53:54.369330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.927 [2024-11-15 11:53:54.369390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.927 [2024-11-15 11:53:54.369411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.927 [2024-11-15 11:53:54.369422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.927 [2024-11-15 11:53:54.369433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.927 [2024-11-15 11:53:54.369456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.928 qpair failed and we were unable to recover it. 00:30:28.928 [2024-11-15 11:53:54.379349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.928 [2024-11-15 11:53:54.379401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.928 [2024-11-15 11:53:54.379424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.928 [2024-11-15 11:53:54.379435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.928 [2024-11-15 11:53:54.379446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.928 [2024-11-15 11:53:54.379470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.928 qpair failed and we were unable to recover it. 00:30:28.928 [2024-11-15 11:53:54.389353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.928 [2024-11-15 11:53:54.389407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.928 [2024-11-15 11:53:54.389427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.928 [2024-11-15 11:53:54.389439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.928 [2024-11-15 11:53:54.389450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.928 [2024-11-15 11:53:54.389473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.928 qpair failed and we were unable to recover it. 
00:30:28.928 [2024-11-15 11:53:54.399390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.928 [2024-11-15 11:53:54.399446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.928 [2024-11-15 11:53:54.399470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.928 [2024-11-15 11:53:54.399482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.928 [2024-11-15 11:53:54.399493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.928 [2024-11-15 11:53:54.399516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.928 qpair failed and we were unable to recover it. 00:30:28.928 [2024-11-15 11:53:54.409436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.928 [2024-11-15 11:53:54.409484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.928 [2024-11-15 11:53:54.409505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.928 [2024-11-15 11:53:54.409517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.928 [2024-11-15 11:53:54.409528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.928 [2024-11-15 11:53:54.409551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.928 qpair failed and we were unable to recover it. 00:30:28.928 [2024-11-15 11:53:54.419578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.928 [2024-11-15 11:53:54.419625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.928 [2024-11-15 11:53:54.419645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.928 [2024-11-15 11:53:54.419657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.928 [2024-11-15 11:53:54.419668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:28.928 [2024-11-15 11:53:54.419692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.928 qpair failed and we were unable to recover it. 
00:30:29.191 [2024-11-15 11:53:54.429448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.191 [2024-11-15 11:53:54.429501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.191 [2024-11-15 11:53:54.429522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.191 [2024-11-15 11:53:54.429534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.191 [2024-11-15 11:53:54.429545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.191 [2024-11-15 11:53:54.429573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.191 qpair failed and we were unable to recover it. 00:30:29.191 [2024-11-15 11:53:54.439506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.191 [2024-11-15 11:53:54.439595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.191 [2024-11-15 11:53:54.439611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.191 [2024-11-15 11:53:54.439623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.191 [2024-11-15 11:53:54.439638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.191 [2024-11-15 11:53:54.439662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.191 qpair failed and we were unable to recover it. 00:30:29.191 [2024-11-15 11:53:54.449513] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.191 [2024-11-15 11:53:54.449570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.191 [2024-11-15 11:53:54.449590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.191 [2024-11-15 11:53:54.449602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.191 [2024-11-15 11:53:54.449614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.191 [2024-11-15 11:53:54.449637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.191 qpair failed and we were unable to recover it. 
00:30:29.191 [2024-11-15 11:53:54.459551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.191 [2024-11-15 11:53:54.459607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.191 [2024-11-15 11:53:54.459627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.191 [2024-11-15 11:53:54.459639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.191 [2024-11-15 11:53:54.459650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.191 [2024-11-15 11:53:54.459673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.191 qpair failed and we were unable to recover it.
00:30:29.191 [2024-11-15 11:53:54.469586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.191 [2024-11-15 11:53:54.469636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.191 [2024-11-15 11:53:54.469657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.191 [2024-11-15 11:53:54.469668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.191 [2024-11-15 11:53:54.469680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.191 [2024-11-15 11:53:54.469702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.191 qpair failed and we were unable to recover it.
00:30:29.191 [2024-11-15 11:53:54.479603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.191 [2024-11-15 11:53:54.479656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.191 [2024-11-15 11:53:54.479678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.191 [2024-11-15 11:53:54.479690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.191 [2024-11-15 11:53:54.479701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.191 [2024-11-15 11:53:54.479725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.191 qpair failed and we were unable to recover it.
00:30:29.191 [2024-11-15 11:53:54.489670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.191 [2024-11-15 11:53:54.489751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.191 [2024-11-15 11:53:54.489767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.191 [2024-11-15 11:53:54.489778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.191 [2024-11-15 11:53:54.489789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.191 [2024-11-15 11:53:54.489812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.191 qpair failed and we were unable to recover it.
00:30:29.191 [2024-11-15 11:53:54.499692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.191 [2024-11-15 11:53:54.499744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.191 [2024-11-15 11:53:54.499765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.191 [2024-11-15 11:53:54.499777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.191 [2024-11-15 11:53:54.499788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.191 [2024-11-15 11:53:54.499811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.191 qpair failed and we were unable to recover it.
00:30:29.191 [2024-11-15 11:53:54.509663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.191 [2024-11-15 11:53:54.509722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.191 [2024-11-15 11:53:54.509741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.192 [2024-11-15 11:53:54.509753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.192 [2024-11-15 11:53:54.509764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.192 [2024-11-15 11:53:54.509787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.192 qpair failed and we were unable to recover it.
00:30:29.192 [2024-11-15 11:53:54.519698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.192 [2024-11-15 11:53:54.519757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.192 [2024-11-15 11:53:54.519776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.192 [2024-11-15 11:53:54.519788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.192 [2024-11-15 11:53:54.519799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.192 [2024-11-15 11:53:54.519823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.192 qpair failed and we were unable to recover it.
00:30:29.192 [2024-11-15 11:53:54.529747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.192 [2024-11-15 11:53:54.529804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.192 [2024-11-15 11:53:54.529830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.192 [2024-11-15 11:53:54.529843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.192 [2024-11-15 11:53:54.529854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.192 [2024-11-15 11:53:54.529876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.192 qpair failed and we were unable to recover it.
00:30:29.192 [2024-11-15 11:53:54.539775] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.192 [2024-11-15 11:53:54.539827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.192 [2024-11-15 11:53:54.539848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.192 [2024-11-15 11:53:54.539861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.192 [2024-11-15 11:53:54.539871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.192 [2024-11-15 11:53:54.539894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.192 qpair failed and we were unable to recover it.
00:30:29.192 [2024-11-15 11:53:54.549840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.192 [2024-11-15 11:53:54.549924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.192 [2024-11-15 11:53:54.549940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.192 [2024-11-15 11:53:54.549951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.192 [2024-11-15 11:53:54.549962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.192 [2024-11-15 11:53:54.549985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.192 qpair failed and we were unable to recover it.
00:30:29.192 [2024-11-15 11:53:54.559705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.192 [2024-11-15 11:53:54.559763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.192 [2024-11-15 11:53:54.559783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.192 [2024-11-15 11:53:54.559795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.192 [2024-11-15 11:53:54.559806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.192 [2024-11-15 11:53:54.559829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.192 qpair failed and we were unable to recover it.
00:30:29.192 [2024-11-15 11:53:54.569871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.192 [2024-11-15 11:53:54.569929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.192 [2024-11-15 11:53:54.569950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.192 [2024-11-15 11:53:54.569961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.192 [2024-11-15 11:53:54.569977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.192 [2024-11-15 11:53:54.570001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.192 qpair failed and we were unable to recover it.
00:30:29.192 [2024-11-15 11:53:54.579805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.192 [2024-11-15 11:53:54.579857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.192 [2024-11-15 11:53:54.579878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.192 [2024-11-15 11:53:54.579889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.192 [2024-11-15 11:53:54.579900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.192 [2024-11-15 11:53:54.579924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.192 qpair failed and we were unable to recover it.
00:30:29.192 [2024-11-15 11:53:54.589898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.192 [2024-11-15 11:53:54.589946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.192 [2024-11-15 11:53:54.589965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.192 [2024-11-15 11:53:54.589977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.192 [2024-11-15 11:53:54.589989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.192 [2024-11-15 11:53:54.590011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.192 qpair failed and we were unable to recover it.
00:30:29.192 [2024-11-15 11:53:54.599937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.192 [2024-11-15 11:53:54.599994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.192 [2024-11-15 11:53:54.600015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.192 [2024-11-15 11:53:54.600026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.192 [2024-11-15 11:53:54.600037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.192 [2024-11-15 11:53:54.600060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.192 qpair failed and we were unable to recover it.
00:30:29.192 [2024-11-15 11:53:54.609983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.192 [2024-11-15 11:53:54.610036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.192 [2024-11-15 11:53:54.610056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.192 [2024-11-15 11:53:54.610067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.192 [2024-11-15 11:53:54.610079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.192 [2024-11-15 11:53:54.610101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.192 qpair failed and we were unable to recover it.
00:30:29.192 [2024-11-15 11:53:54.619990] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.192 [2024-11-15 11:53:54.620092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.192 [2024-11-15 11:53:54.620108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.192 [2024-11-15 11:53:54.620119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.192 [2024-11-15 11:53:54.620129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.192 [2024-11-15 11:53:54.620152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.192 qpair failed and we were unable to recover it.
00:30:29.192 [2024-11-15 11:53:54.630009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.192 [2024-11-15 11:53:54.630065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.192 [2024-11-15 11:53:54.630086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.192 [2024-11-15 11:53:54.630098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.192 [2024-11-15 11:53:54.630109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.192 [2024-11-15 11:53:54.630131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.192 qpair failed and we were unable to recover it.
00:30:29.192 [2024-11-15 11:53:54.639931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.192 [2024-11-15 11:53:54.639984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.192 [2024-11-15 11:53:54.640005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.192 [2024-11-15 11:53:54.640017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.193 [2024-11-15 11:53:54.640029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.193 [2024-11-15 11:53:54.640051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.193 qpair failed and we were unable to recover it.
00:30:29.193 [2024-11-15 11:53:54.649955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.193 [2024-11-15 11:53:54.650010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.193 [2024-11-15 11:53:54.650033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.193 [2024-11-15 11:53:54.650045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.193 [2024-11-15 11:53:54.650056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.193 [2024-11-15 11:53:54.650080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.193 qpair failed and we were unable to recover it.
00:30:29.193 [2024-11-15 11:53:54.660102] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.193 [2024-11-15 11:53:54.660151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.193 [2024-11-15 11:53:54.660176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.193 [2024-11-15 11:53:54.660188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.193 [2024-11-15 11:53:54.660199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.193 [2024-11-15 11:53:54.660222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.193 qpair failed and we were unable to recover it.
00:30:29.193 [2024-11-15 11:53:54.670129] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.193 [2024-11-15 11:53:54.670184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.193 [2024-11-15 11:53:54.670205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.193 [2024-11-15 11:53:54.670217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.193 [2024-11-15 11:53:54.670229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.193 [2024-11-15 11:53:54.670252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.193 qpair failed and we were unable to recover it.
00:30:29.193 [2024-11-15 11:53:54.680142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.193 [2024-11-15 11:53:54.680196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.193 [2024-11-15 11:53:54.680218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.193 [2024-11-15 11:53:54.680230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.193 [2024-11-15 11:53:54.680242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.193 [2024-11-15 11:53:54.680265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.193 qpair failed and we were unable to recover it.
00:30:29.455 [2024-11-15 11:53:54.690156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.455 [2024-11-15 11:53:54.690216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.455 [2024-11-15 11:53:54.690237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.455 [2024-11-15 11:53:54.690249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.455 [2024-11-15 11:53:54.690261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.455 [2024-11-15 11:53:54.690284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.455 qpair failed and we were unable to recover it.
00:30:29.455 [2024-11-15 11:53:54.700208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.455 [2024-11-15 11:53:54.700261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.455 [2024-11-15 11:53:54.700282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.455 [2024-11-15 11:53:54.700301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.455 [2024-11-15 11:53:54.700313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.455 [2024-11-15 11:53:54.700336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.455 qpair failed and we were unable to recover it.
00:30:29.455 [2024-11-15 11:53:54.710246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.455 [2024-11-15 11:53:54.710312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.455 [2024-11-15 11:53:54.710330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.455 [2024-11-15 11:53:54.710342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.455 [2024-11-15 11:53:54.710353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.455 [2024-11-15 11:53:54.710376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.455 qpair failed and we were unable to recover it.
00:30:29.455 [2024-11-15 11:53:54.720238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.455 [2024-11-15 11:53:54.720288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.455 [2024-11-15 11:53:54.720308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.455 [2024-11-15 11:53:54.720320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.455 [2024-11-15 11:53:54.720331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.455 [2024-11-15 11:53:54.720354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.455 qpair failed and we were unable to recover it.
00:30:29.455 [2024-11-15 11:53:54.730289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.455 [2024-11-15 11:53:54.730349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.455 [2024-11-15 11:53:54.730369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.455 [2024-11-15 11:53:54.730381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.455 [2024-11-15 11:53:54.730392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.455 [2024-11-15 11:53:54.730415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.455 qpair failed and we were unable to recover it.
00:30:29.455 [2024-11-15 11:53:54.740289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.455 [2024-11-15 11:53:54.740336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.455 [2024-11-15 11:53:54.740356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.455 [2024-11-15 11:53:54.740368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.455 [2024-11-15 11:53:54.740379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.455 [2024-11-15 11:53:54.740411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.455 qpair failed and we were unable to recover it.
00:30:29.455 [2024-11-15 11:53:54.750300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.455 [2024-11-15 11:53:54.750374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.455 [2024-11-15 11:53:54.750390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.455 [2024-11-15 11:53:54.750402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.455 [2024-11-15 11:53:54.750413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.455 [2024-11-15 11:53:54.750436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.455 qpair failed and we were unable to recover it.
00:30:29.455 [2024-11-15 11:53:54.760373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.455 [2024-11-15 11:53:54.760431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.455 [2024-11-15 11:53:54.760452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.455 [2024-11-15 11:53:54.760463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.455 [2024-11-15 11:53:54.760474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.455 [2024-11-15 11:53:54.760497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.455 qpair failed and we were unable to recover it.
00:30:29.455 [2024-11-15 11:53:54.770399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.455 [2024-11-15 11:53:54.770500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.455 [2024-11-15 11:53:54.770516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.455 [2024-11-15 11:53:54.770528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.455 [2024-11-15 11:53:54.770539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.455 [2024-11-15 11:53:54.770568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.455 qpair failed and we were unable to recover it.
00:30:29.455 [2024-11-15 11:53:54.780424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.455 [2024-11-15 11:53:54.780478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.455 [2024-11-15 11:53:54.780501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.455 [2024-11-15 11:53:54.780514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.455 [2024-11-15 11:53:54.780525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.455 [2024-11-15 11:53:54.780548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.455 qpair failed and we were unable to recover it.
00:30:29.455 [2024-11-15 11:53:54.790415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.455 [2024-11-15 11:53:54.790469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.455 [2024-11-15 11:53:54.790491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.455 [2024-11-15 11:53:54.790502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.455 [2024-11-15 11:53:54.790513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.455 [2024-11-15 11:53:54.790536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.455 qpair failed and we were unable to recover it.
00:30:29.455 [2024-11-15 11:53:54.800475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.455 [2024-11-15 11:53:54.800536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.455 [2024-11-15 11:53:54.800556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.455 [2024-11-15 11:53:54.800572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.455 [2024-11-15 11:53:54.800583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.455 [2024-11-15 11:53:54.800606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.455 qpair failed and we were unable to recover it.
00:30:29.455 [2024-11-15 11:53:54.810509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.455 [2024-11-15 11:53:54.810568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.455 [2024-11-15 11:53:54.810590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.455 [2024-11-15 11:53:54.810601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.455 [2024-11-15 11:53:54.810612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.455 [2024-11-15 11:53:54.810636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.455 qpair failed and we were unable to recover it.
00:30:29.455 [2024-11-15 11:53:54.820497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.455 [2024-11-15 11:53:54.820587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.455 [2024-11-15 11:53:54.820603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.455 [2024-11-15 11:53:54.820614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.455 [2024-11-15 11:53:54.820625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.455 [2024-11-15 11:53:54.820649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.455 qpair failed and we were unable to recover it.
00:30:29.455 [2024-11-15 11:53:54.830564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.455 [2024-11-15 11:53:54.830649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.455 [2024-11-15 11:53:54.830664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.455 [2024-11-15 11:53:54.830679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.455 [2024-11-15 11:53:54.830691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.455 [2024-11-15 11:53:54.830713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.455 qpair failed and we were unable to recover it.
00:30:29.455 [2024-11-15 11:53:54.840597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.456 [2024-11-15 11:53:54.840688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.456 [2024-11-15 11:53:54.840704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.456 [2024-11-15 11:53:54.840715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.456 [2024-11-15 11:53:54.840726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.456 [2024-11-15 11:53:54.840749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.456 qpair failed and we were unable to recover it.
00:30:29.456 [2024-11-15 11:53:54.850617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.456 [2024-11-15 11:53:54.850668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.456 [2024-11-15 11:53:54.850688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.456 [2024-11-15 11:53:54.850700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.456 [2024-11-15 11:53:54.850711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.456 [2024-11-15 11:53:54.850734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.456 qpair failed and we were unable to recover it.
00:30:29.456 [2024-11-15 11:53:54.860521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.456 [2024-11-15 11:53:54.860580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.456 [2024-11-15 11:53:54.860600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.456 [2024-11-15 11:53:54.860612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.456 [2024-11-15 11:53:54.860624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.456 [2024-11-15 11:53:54.860647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.456 qpair failed and we were unable to recover it.
00:30:29.456 [2024-11-15 11:53:54.870675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.456 [2024-11-15 11:53:54.870747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.456 [2024-11-15 11:53:54.870762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.456 [2024-11-15 11:53:54.870774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.456 [2024-11-15 11:53:54.870784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.456 [2024-11-15 11:53:54.870811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.456 qpair failed and we were unable to recover it.
00:30:29.456 [2024-11-15 11:53:54.880711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.456 [2024-11-15 11:53:54.880770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.456 [2024-11-15 11:53:54.880791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.456 [2024-11-15 11:53:54.880802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.456 [2024-11-15 11:53:54.880813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.456 [2024-11-15 11:53:54.880835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.456 qpair failed and we were unable to recover it.
00:30:29.456 [2024-11-15 11:53:54.890729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.456 [2024-11-15 11:53:54.890788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.456 [2024-11-15 11:53:54.890809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.456 [2024-11-15 11:53:54.890821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.456 [2024-11-15 11:53:54.890832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.456 [2024-11-15 11:53:54.890854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.456 qpair failed and we were unable to recover it.
00:30:29.456 [2024-11-15 11:53:54.900765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.456 [2024-11-15 11:53:54.900830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.456 [2024-11-15 11:53:54.900848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.456 [2024-11-15 11:53:54.900860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.456 [2024-11-15 11:53:54.900870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.456 [2024-11-15 11:53:54.900894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.456 qpair failed and we were unable to recover it.
00:30:29.456 [2024-11-15 11:53:54.910784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.456 [2024-11-15 11:53:54.910834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.456 [2024-11-15 11:53:54.910856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.456 [2024-11-15 11:53:54.910867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.456 [2024-11-15 11:53:54.910879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.456 [2024-11-15 11:53:54.910901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.456 qpair failed and we were unable to recover it.
00:30:29.456 [2024-11-15 11:53:54.920827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.456 [2024-11-15 11:53:54.920882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.456 [2024-11-15 11:53:54.920902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.456 [2024-11-15 11:53:54.920914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.456 [2024-11-15 11:53:54.920925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.456 [2024-11-15 11:53:54.920948] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.456 qpair failed and we were unable to recover it.
00:30:29.456 [2024-11-15 11:53:54.930853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.456 [2024-11-15 11:53:54.930907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.456 [2024-11-15 11:53:54.930928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.456 [2024-11-15 11:53:54.930939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.456 [2024-11-15 11:53:54.930950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.456 [2024-11-15 11:53:54.930972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.456 qpair failed and we were unable to recover it.
00:30:29.456 [2024-11-15 11:53:54.940736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.456 [2024-11-15 11:53:54.940786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.456 [2024-11-15 11:53:54.940809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.456 [2024-11-15 11:53:54.940821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.456 [2024-11-15 11:53:54.940832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.456 [2024-11-15 11:53:54.940856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.456 qpair failed and we were unable to recover it.
00:30:29.719 [2024-11-15 11:53:54.950884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.719 [2024-11-15 11:53:54.950935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.719 [2024-11-15 11:53:54.950956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.719 [2024-11-15 11:53:54.950968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.719 [2024-11-15 11:53:54.950979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.719 [2024-11-15 11:53:54.951002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.719 qpair failed and we were unable to recover it.
00:30:29.719 [2024-11-15 11:53:54.960899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.719 [2024-11-15 11:53:54.960955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.719 [2024-11-15 11:53:54.960979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.719 [2024-11-15 11:53:54.960990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.719 [2024-11-15 11:53:54.961001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.719 [2024-11-15 11:53:54.961024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.719 qpair failed and we were unable to recover it.
00:30:29.719 [2024-11-15 11:53:54.970945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.719 [2024-11-15 11:53:54.971046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.719 [2024-11-15 11:53:54.971062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.719 [2024-11-15 11:53:54.971073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.719 [2024-11-15 11:53:54.971084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.719 [2024-11-15 11:53:54.971107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.719 qpair failed and we were unable to recover it.
00:30:29.719 [2024-11-15 11:53:54.980948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.719 [2024-11-15 11:53:54.981000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.719 [2024-11-15 11:53:54.981022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.719 [2024-11-15 11:53:54.981034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.719 [2024-11-15 11:53:54.981045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.719 [2024-11-15 11:53:54.981069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.719 qpair failed and we were unable to recover it.
00:30:29.719 [2024-11-15 11:53:54.990983] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.719 [2024-11-15 11:53:54.991034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.719 [2024-11-15 11:53:54.991056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.719 [2024-11-15 11:53:54.991068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.719 [2024-11-15 11:53:54.991079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.719 [2024-11-15 11:53:54.991103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.719 qpair failed and we were unable to recover it.
00:30:29.719 [2024-11-15 11:53:55.000993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.719 [2024-11-15 11:53:55.001044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.719 [2024-11-15 11:53:55.001066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.719 [2024-11-15 11:53:55.001078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.719 [2024-11-15 11:53:55.001093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.719 [2024-11-15 11:53:55.001116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.719 qpair failed and we were unable to recover it.
00:30:29.719 [2024-11-15 11:53:55.011043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.719 [2024-11-15 11:53:55.011098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.719 [2024-11-15 11:53:55.011120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.719 [2024-11-15 11:53:55.011132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.719 [2024-11-15 11:53:55.011143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.719 [2024-11-15 11:53:55.011165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.719 qpair failed and we were unable to recover it.
00:30:29.719 [2024-11-15 11:53:55.021032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.719 [2024-11-15 11:53:55.021107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.719 [2024-11-15 11:53:55.021127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.719 [2024-11-15 11:53:55.021140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.719 [2024-11-15 11:53:55.021151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:29.719 [2024-11-15 11:53:55.021174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:29.719 qpair failed and we were unable to recover it.
00:30:29.719 [2024-11-15 11:53:55.031050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.719 [2024-11-15 11:53:55.031102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.719 [2024-11-15 11:53:55.031122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.719 [2024-11-15 11:53:55.031135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.719 [2024-11-15 11:53:55.031148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.719 [2024-11-15 11:53:55.031172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.719 qpair failed and we were unable to recover it. 00:30:29.719 [2024-11-15 11:53:55.041133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.719 [2024-11-15 11:53:55.041189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.719 [2024-11-15 11:53:55.041209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.719 [2024-11-15 11:53:55.041221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.719 [2024-11-15 11:53:55.041232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.720 [2024-11-15 11:53:55.041256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.720 qpair failed and we were unable to recover it. 00:30:29.720 [2024-11-15 11:53:55.051166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.720 [2024-11-15 11:53:55.051217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.720 [2024-11-15 11:53:55.051236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.720 [2024-11-15 11:53:55.051248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.720 [2024-11-15 11:53:55.051259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.720 [2024-11-15 11:53:55.051282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.720 qpair failed and we were unable to recover it. 
00:30:29.720 [2024-11-15 11:53:55.061155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.720 [2024-11-15 11:53:55.061222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.720 [2024-11-15 11:53:55.061240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.720 [2024-11-15 11:53:55.061251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.720 [2024-11-15 11:53:55.061262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.720 [2024-11-15 11:53:55.061285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.720 qpair failed and we were unable to recover it. 00:30:29.720 [2024-11-15 11:53:55.071204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.720 [2024-11-15 11:53:55.071259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.720 [2024-11-15 11:53:55.071279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.720 [2024-11-15 11:53:55.071291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.720 [2024-11-15 11:53:55.071301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.720 [2024-11-15 11:53:55.071324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.720 qpair failed and we were unable to recover it. 00:30:29.720 [2024-11-15 11:53:55.081231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.720 [2024-11-15 11:53:55.081281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.720 [2024-11-15 11:53:55.081302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.720 [2024-11-15 11:53:55.081314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.720 [2024-11-15 11:53:55.081325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.720 [2024-11-15 11:53:55.081347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.720 qpair failed and we were unable to recover it. 
00:30:29.720 [2024-11-15 11:53:55.091150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.720 [2024-11-15 11:53:55.091205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.720 [2024-11-15 11:53:55.091230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.720 [2024-11-15 11:53:55.091241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.720 [2024-11-15 11:53:55.091252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.720 [2024-11-15 11:53:55.091275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.720 qpair failed and we were unable to recover it. 00:30:29.720 [2024-11-15 11:53:55.101153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.720 [2024-11-15 11:53:55.101204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.720 [2024-11-15 11:53:55.101225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.720 [2024-11-15 11:53:55.101238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.720 [2024-11-15 11:53:55.101249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.720 [2024-11-15 11:53:55.101271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.720 qpair failed and we were unable to recover it. 00:30:29.720 [2024-11-15 11:53:55.111180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.720 [2024-11-15 11:53:55.111232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.720 [2024-11-15 11:53:55.111253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.720 [2024-11-15 11:53:55.111265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.720 [2024-11-15 11:53:55.111276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.720 [2024-11-15 11:53:55.111299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.720 qpair failed and we were unable to recover it. 
00:30:29.720 [2024-11-15 11:53:55.121343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.720 [2024-11-15 11:53:55.121412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.720 [2024-11-15 11:53:55.121429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.720 [2024-11-15 11:53:55.121441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.720 [2024-11-15 11:53:55.121452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.720 [2024-11-15 11:53:55.121475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.720 qpair failed and we were unable to recover it. 00:30:29.720 [2024-11-15 11:53:55.131268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.720 [2024-11-15 11:53:55.131323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.720 [2024-11-15 11:53:55.131345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.720 [2024-11-15 11:53:55.131357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.720 [2024-11-15 11:53:55.131372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.720 [2024-11-15 11:53:55.131395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.720 qpair failed and we were unable to recover it. 00:30:29.720 [2024-11-15 11:53:55.141262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.720 [2024-11-15 11:53:55.141313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.720 [2024-11-15 11:53:55.141333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.720 [2024-11-15 11:53:55.141345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.720 [2024-11-15 11:53:55.141356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.720 [2024-11-15 11:53:55.141379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.720 qpair failed and we were unable to recover it. 
00:30:29.720 [2024-11-15 11:53:55.151388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.720 [2024-11-15 11:53:55.151441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.720 [2024-11-15 11:53:55.151462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.720 [2024-11-15 11:53:55.151474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.720 [2024-11-15 11:53:55.151486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.720 [2024-11-15 11:53:55.151509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.720 qpair failed and we were unable to recover it. 00:30:29.720 [2024-11-15 11:53:55.161445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.720 [2024-11-15 11:53:55.161499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.720 [2024-11-15 11:53:55.161521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.720 [2024-11-15 11:53:55.161533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.720 [2024-11-15 11:53:55.161544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.720 [2024-11-15 11:53:55.161573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.720 qpair failed and we were unable to recover it. 00:30:29.720 [2024-11-15 11:53:55.171522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.720 [2024-11-15 11:53:55.171579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.720 [2024-11-15 11:53:55.171601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.721 [2024-11-15 11:53:55.171613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.721 [2024-11-15 11:53:55.171624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.721 [2024-11-15 11:53:55.171647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.721 qpair failed and we were unable to recover it. 
00:30:29.721 [2024-11-15 11:53:55.181369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.721 [2024-11-15 11:53:55.181422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.721 [2024-11-15 11:53:55.181443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.721 [2024-11-15 11:53:55.181455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.721 [2024-11-15 11:53:55.181466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.721 [2024-11-15 11:53:55.181489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.721 qpair failed and we were unable to recover it. 00:30:29.721 [2024-11-15 11:53:55.191493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.721 [2024-11-15 11:53:55.191540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.721 [2024-11-15 11:53:55.191560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.721 [2024-11-15 11:53:55.191577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.721 [2024-11-15 11:53:55.191588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.721 [2024-11-15 11:53:55.191611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.721 qpair failed and we were unable to recover it. 00:30:29.721 [2024-11-15 11:53:55.201569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.721 [2024-11-15 11:53:55.201623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.721 [2024-11-15 11:53:55.201645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.721 [2024-11-15 11:53:55.201657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.721 [2024-11-15 11:53:55.201668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.721 [2024-11-15 11:53:55.201690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.721 qpair failed and we were unable to recover it. 
00:30:29.721 [2024-11-15 11:53:55.211593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.721 [2024-11-15 11:53:55.211654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.721 [2024-11-15 11:53:55.211674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.721 [2024-11-15 11:53:55.211686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.721 [2024-11-15 11:53:55.211697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.721 [2024-11-15 11:53:55.211720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.721 qpair failed and we were unable to recover it. 00:30:29.984 [2024-11-15 11:53:55.221602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.984 [2024-11-15 11:53:55.221674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.984 [2024-11-15 11:53:55.221693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.984 [2024-11-15 11:53:55.221704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.984 [2024-11-15 11:53:55.221715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.984 [2024-11-15 11:53:55.221738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.984 qpair failed and we were unable to recover it. 00:30:29.984 [2024-11-15 11:53:55.231597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.984 [2024-11-15 11:53:55.231645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.984 [2024-11-15 11:53:55.231664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.984 [2024-11-15 11:53:55.231676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.984 [2024-11-15 11:53:55.231687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.984 [2024-11-15 11:53:55.231710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.984 qpair failed and we were unable to recover it. 
00:30:29.984 [2024-11-15 11:53:55.241669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.984 [2024-11-15 11:53:55.241748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.984 [2024-11-15 11:53:55.241764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.984 [2024-11-15 11:53:55.241775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.984 [2024-11-15 11:53:55.241786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.984 [2024-11-15 11:53:55.241809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.984 qpair failed and we were unable to recover it. 00:30:29.984 [2024-11-15 11:53:55.251655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.984 [2024-11-15 11:53:55.251712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.984 [2024-11-15 11:53:55.251732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.984 [2024-11-15 11:53:55.251744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.984 [2024-11-15 11:53:55.251755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.984 [2024-11-15 11:53:55.251778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.984 qpair failed and we were unable to recover it. 00:30:29.984 [2024-11-15 11:53:55.261683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.984 [2024-11-15 11:53:55.261736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.984 [2024-11-15 11:53:55.261757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.984 [2024-11-15 11:53:55.261773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.984 [2024-11-15 11:53:55.261784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.984 [2024-11-15 11:53:55.261806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.984 qpair failed and we were unable to recover it. 
00:30:29.984 [2024-11-15 11:53:55.271726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.984 [2024-11-15 11:53:55.271779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.984 [2024-11-15 11:53:55.271802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.984 [2024-11-15 11:53:55.271814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.984 [2024-11-15 11:53:55.271826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.984 [2024-11-15 11:53:55.271851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.984 qpair failed and we were unable to recover it. 00:30:29.984 [2024-11-15 11:53:55.281768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.984 [2024-11-15 11:53:55.281822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.984 [2024-11-15 11:53:55.281841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.984 [2024-11-15 11:53:55.281854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.984 [2024-11-15 11:53:55.281867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.984 [2024-11-15 11:53:55.281891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.984 qpair failed and we were unable to recover it. 00:30:29.984 [2024-11-15 11:53:55.291804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.984 [2024-11-15 11:53:55.291856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.984 [2024-11-15 11:53:55.291876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.984 [2024-11-15 11:53:55.291889] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.984 [2024-11-15 11:53:55.291900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.984 [2024-11-15 11:53:55.291924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.984 qpair failed and we were unable to recover it. 
00:30:29.984 [2024-11-15 11:53:55.301782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.984 [2024-11-15 11:53:55.301828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.984 [2024-11-15 11:53:55.301848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.984 [2024-11-15 11:53:55.301860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.984 [2024-11-15 11:53:55.301872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.984 [2024-11-15 11:53:55.301899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.984 qpair failed and we were unable to recover it. 00:30:29.984 [2024-11-15 11:53:55.311810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.984 [2024-11-15 11:53:55.311865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.984 [2024-11-15 11:53:55.311885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.985 [2024-11-15 11:53:55.311897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.985 [2024-11-15 11:53:55.311908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.985 [2024-11-15 11:53:55.311932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.985 qpair failed and we were unable to recover it. 00:30:29.985 [2024-11-15 11:53:55.321886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.985 [2024-11-15 11:53:55.321939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.985 [2024-11-15 11:53:55.321959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.985 [2024-11-15 11:53:55.321971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.985 [2024-11-15 11:53:55.321982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.985 [2024-11-15 11:53:55.322005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.985 qpair failed and we were unable to recover it. 
00:30:29.985 [2024-11-15 11:53:55.331915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.985 [2024-11-15 11:53:55.331972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.985 [2024-11-15 11:53:55.331994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.985 [2024-11-15 11:53:55.332005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.985 [2024-11-15 11:53:55.332016] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.985 [2024-11-15 11:53:55.332040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.985 qpair failed and we were unable to recover it. 00:30:29.985 [2024-11-15 11:53:55.341844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.985 [2024-11-15 11:53:55.341900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.985 [2024-11-15 11:53:55.341922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.985 [2024-11-15 11:53:55.341933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.985 [2024-11-15 11:53:55.341944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.985 [2024-11-15 11:53:55.341968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.985 qpair failed and we were unable to recover it. 00:30:29.985 [2024-11-15 11:53:55.351954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.985 [2024-11-15 11:53:55.352010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.985 [2024-11-15 11:53:55.352033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.985 [2024-11-15 11:53:55.352045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.985 [2024-11-15 11:53:55.352056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.985 [2024-11-15 11:53:55.352079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.985 qpair failed and we were unable to recover it. 
00:30:29.985 [2024-11-15 11:53:55.361984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.985 [2024-11-15 11:53:55.362037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.985 [2024-11-15 11:53:55.362058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.985 [2024-11-15 11:53:55.362070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.985 [2024-11-15 11:53:55.362081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.985 [2024-11-15 11:53:55.362105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.985 qpair failed and we were unable to recover it. 00:30:29.985 [2024-11-15 11:53:55.372008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.985 [2024-11-15 11:53:55.372062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.985 [2024-11-15 11:53:55.372081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.985 [2024-11-15 11:53:55.372096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.985 [2024-11-15 11:53:55.372109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.985 [2024-11-15 11:53:55.372134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.985 qpair failed and we were unable to recover it. 00:30:29.985 [2024-11-15 11:53:55.382022] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.985 [2024-11-15 11:53:55.382076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.985 [2024-11-15 11:53:55.382097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.985 [2024-11-15 11:53:55.382109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.985 [2024-11-15 11:53:55.382121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.985 [2024-11-15 11:53:55.382143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.985 qpair failed and we were unable to recover it. 
00:30:29.985 [2024-11-15 11:53:55.391945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.985 [2024-11-15 11:53:55.391994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.985 [2024-11-15 11:53:55.392015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.985 [2024-11-15 11:53:55.392030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.985 [2024-11-15 11:53:55.392042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.985 [2024-11-15 11:53:55.392065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.985 qpair failed and we were unable to recover it. 00:30:29.985 [2024-11-15 11:53:55.402086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.985 [2024-11-15 11:53:55.402139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.985 [2024-11-15 11:53:55.402161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.985 [2024-11-15 11:53:55.402173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.985 [2024-11-15 11:53:55.402184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.985 [2024-11-15 11:53:55.402207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.985 qpair failed and we were unable to recover it. 00:30:29.985 [2024-11-15 11:53:55.412138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.985 [2024-11-15 11:53:55.412229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.985 [2024-11-15 11:53:55.412245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.985 [2024-11-15 11:53:55.412257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.985 [2024-11-15 11:53:55.412268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.985 [2024-11-15 11:53:55.412292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.985 qpair failed and we were unable to recover it. 
00:30:29.985 [2024-11-15 11:53:55.422038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.985 [2024-11-15 11:53:55.422089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.986 [2024-11-15 11:53:55.422110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.986 [2024-11-15 11:53:55.422125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.986 [2024-11-15 11:53:55.422138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.986 [2024-11-15 11:53:55.422162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.986 qpair failed and we were unable to recover it. 00:30:29.986 [2024-11-15 11:53:55.432164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.986 [2024-11-15 11:53:55.432213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.986 [2024-11-15 11:53:55.432233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.986 [2024-11-15 11:53:55.432245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.986 [2024-11-15 11:53:55.432256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.986 [2024-11-15 11:53:55.432284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.986 qpair failed and we were unable to recover it. 00:30:29.986 [2024-11-15 11:53:55.442182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.986 [2024-11-15 11:53:55.442234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.986 [2024-11-15 11:53:55.442256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.986 [2024-11-15 11:53:55.442268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.986 [2024-11-15 11:53:55.442279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.986 [2024-11-15 11:53:55.442302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.986 qpair failed and we were unable to recover it. 
00:30:29.986 [2024-11-15 11:53:55.452240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.986 [2024-11-15 11:53:55.452300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.986 [2024-11-15 11:53:55.452328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.986 [2024-11-15 11:53:55.452342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.986 [2024-11-15 11:53:55.452354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.986 [2024-11-15 11:53:55.452381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.986 qpair failed and we were unable to recover it. 00:30:29.986 [2024-11-15 11:53:55.462241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.986 [2024-11-15 11:53:55.462297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.986 [2024-11-15 11:53:55.462321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.986 [2024-11-15 11:53:55.462334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.986 [2024-11-15 11:53:55.462345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.986 [2024-11-15 11:53:55.462370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.986 qpair failed and we were unable to recover it. 00:30:29.986 [2024-11-15 11:53:55.472268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.986 [2024-11-15 11:53:55.472355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.986 [2024-11-15 11:53:55.472371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.986 [2024-11-15 11:53:55.472383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.986 [2024-11-15 11:53:55.472393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:29.986 [2024-11-15 11:53:55.472418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:29.986 qpair failed and we were unable to recover it. 
00:30:30.249 [2024-11-15 11:53:55.482299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.249 [2024-11-15 11:53:55.482357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.249 [2024-11-15 11:53:55.482378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.249 [2024-11-15 11:53:55.482391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.249 [2024-11-15 11:53:55.482402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:30.249 [2024-11-15 11:53:55.482425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.249 qpair failed and we were unable to recover it. 00:30:30.249 [2024-11-15 11:53:55.492295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.249 [2024-11-15 11:53:55.492355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.249 [2024-11-15 11:53:55.492377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.249 [2024-11-15 11:53:55.492390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.249 [2024-11-15 11:53:55.492402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:30.249 [2024-11-15 11:53:55.492428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.249 qpair failed and we were unable to recover it. 00:30:30.249 [2024-11-15 11:53:55.502347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.249 [2024-11-15 11:53:55.502406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.249 [2024-11-15 11:53:55.502426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.249 [2024-11-15 11:53:55.502438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.249 [2024-11-15 11:53:55.502449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:30.249 [2024-11-15 11:53:55.502472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.249 qpair failed and we were unable to recover it. 
00:30:30.249 [2024-11-15 11:53:55.512372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.249 [2024-11-15 11:53:55.512425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.249 [2024-11-15 11:53:55.512446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.249 [2024-11-15 11:53:55.512458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.249 [2024-11-15 11:53:55.512469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:30.249 [2024-11-15 11:53:55.512492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.249 qpair failed and we were unable to recover it. 00:30:30.249 [2024-11-15 11:53:55.522373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.249 [2024-11-15 11:53:55.522456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.249 [2024-11-15 11:53:55.522481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.249 [2024-11-15 11:53:55.522492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.249 [2024-11-15 11:53:55.522504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:30.249 [2024-11-15 11:53:55.522527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.249 qpair failed and we were unable to recover it. 00:30:30.249 [2024-11-15 11:53:55.532434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.249 [2024-11-15 11:53:55.532495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.249 [2024-11-15 11:53:55.532517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.249 [2024-11-15 11:53:55.532529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.249 [2024-11-15 11:53:55.532542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:30.249 [2024-11-15 11:53:55.532571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.249 qpair failed and we were unable to recover it. 
00:30:30.249 [2024-11-15 11:53:55.542463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.249 [2024-11-15 11:53:55.542511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.249 [2024-11-15 11:53:55.542531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.249 [2024-11-15 11:53:55.542542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.249 [2024-11-15 11:53:55.542553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:30.249 [2024-11-15 11:53:55.542583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:30.249 qpair failed and we were unable to recover it.
00:30:30.780 [2024-11-15 11:53:56.224336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.780 [2024-11-15 11:53:56.224391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.780 [2024-11-15 11:53:56.224419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.780 [2024-11-15 11:53:56.224432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.780 [2024-11-15 11:53:56.224445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90
00:30:30.780 [2024-11-15 11:53:56.224472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:30:30.780 qpair failed and we were unable to recover it.
00:30:30.780 [2024-11-15 11:53:56.234311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.780 [2024-11-15 11:53:56.234363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.780 [2024-11-15 11:53:56.234386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.780 [2024-11-15 11:53:56.234399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.780 [2024-11-15 11:53:56.234409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:30.780 [2024-11-15 11:53:56.234435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.780 qpair failed and we were unable to recover it. 00:30:30.780 [2024-11-15 11:53:56.244326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.780 [2024-11-15 11:53:56.244421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.780 [2024-11-15 11:53:56.244438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.780 [2024-11-15 11:53:56.244450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.780 [2024-11-15 11:53:56.244461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:30.780 [2024-11-15 11:53:56.244484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.780 qpair failed and we were unable to recover it. 00:30:30.780 [2024-11-15 11:53:56.254372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.780 [2024-11-15 11:53:56.254439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.780 [2024-11-15 11:53:56.254457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.780 [2024-11-15 11:53:56.254468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.780 [2024-11-15 11:53:56.254484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:30.780 [2024-11-15 11:53:56.254508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.780 qpair failed and we were unable to recover it. 
00:30:30.780 [2024-11-15 11:53:56.264291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.780 [2024-11-15 11:53:56.264352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.780 [2024-11-15 11:53:56.264374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.780 [2024-11-15 11:53:56.264387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.780 [2024-11-15 11:53:56.264398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:30.780 [2024-11-15 11:53:56.264421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:30.780 qpair failed and we were unable to recover it. 00:30:31.043 [2024-11-15 11:53:56.274414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.043 [2024-11-15 11:53:56.274496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.043 [2024-11-15 11:53:56.274516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.043 [2024-11-15 11:53:56.274529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.043 [2024-11-15 11:53:56.274540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:31.043 [2024-11-15 11:53:56.274568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.043 qpair failed and we were unable to recover it. 00:30:31.043 [2024-11-15 11:53:56.284408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.043 [2024-11-15 11:53:56.284465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.043 [2024-11-15 11:53:56.284485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.043 [2024-11-15 11:53:56.284498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.043 [2024-11-15 11:53:56.284510] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:31.043 [2024-11-15 11:53:56.284534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.043 qpair failed and we were unable to recover it. 
00:30:31.043 [2024-11-15 11:53:56.294478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.043 [2024-11-15 11:53:56.294545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.043 [2024-11-15 11:53:56.294569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.043 [2024-11-15 11:53:56.294581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.043 [2024-11-15 11:53:56.294592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:31.043 [2024-11-15 11:53:56.294615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.043 qpair failed and we were unable to recover it. 00:30:31.043 [2024-11-15 11:53:56.304454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.043 [2024-11-15 11:53:56.304502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.043 [2024-11-15 11:53:56.304524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.043 [2024-11-15 11:53:56.304536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.043 [2024-11-15 11:53:56.304547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:31.043 [2024-11-15 11:53:56.304575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.043 qpair failed and we were unable to recover it. 00:30:31.043 [2024-11-15 11:53:56.314470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.043 [2024-11-15 11:53:56.314550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.043 [2024-11-15 11:53:56.314570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.043 [2024-11-15 11:53:56.314581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.043 [2024-11-15 11:53:56.314592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:31.043 [2024-11-15 11:53:56.314615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.043 qpair failed and we were unable to recover it. 
00:30:31.043 [2024-11-15 11:53:56.324569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.043 [2024-11-15 11:53:56.324623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.044 [2024-11-15 11:53:56.324642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.044 [2024-11-15 11:53:56.324654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.044 [2024-11-15 11:53:56.324665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:31.044 [2024-11-15 11:53:56.324688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.044 qpair failed and we were unable to recover it. 00:30:31.044 [2024-11-15 11:53:56.334591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.044 [2024-11-15 11:53:56.334689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.044 [2024-11-15 11:53:56.334706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.044 [2024-11-15 11:53:56.334718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.044 [2024-11-15 11:53:56.334729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:31.044 [2024-11-15 11:53:56.334753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.044 qpair failed and we were unable to recover it. 00:30:31.044 [2024-11-15 11:53:56.344587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.044 [2024-11-15 11:53:56.344654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.044 [2024-11-15 11:53:56.344672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.044 [2024-11-15 11:53:56.344684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.044 [2024-11-15 11:53:56.344695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:31.044 [2024-11-15 11:53:56.344719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.044 qpair failed and we were unable to recover it. 
00:30:31.044 [2024-11-15 11:53:56.354592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.044 [2024-11-15 11:53:56.354642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.044 [2024-11-15 11:53:56.354663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.044 [2024-11-15 11:53:56.354675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.044 [2024-11-15 11:53:56.354687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:31.044 [2024-11-15 11:53:56.354710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.044 qpair failed and we were unable to recover it. 00:30:31.044 [2024-11-15 11:53:56.364656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.044 [2024-11-15 11:53:56.364711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.044 [2024-11-15 11:53:56.364733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.044 [2024-11-15 11:53:56.364745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.044 [2024-11-15 11:53:56.364756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:31.044 [2024-11-15 11:53:56.364780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.044 qpair failed and we were unable to recover it. 00:30:31.044 [2024-11-15 11:53:56.374686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.044 [2024-11-15 11:53:56.374742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.044 [2024-11-15 11:53:56.374763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.044 [2024-11-15 11:53:56.374775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.044 [2024-11-15 11:53:56.374786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:31.044 [2024-11-15 11:53:56.374809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.044 qpair failed and we were unable to recover it. 
00:30:31.044 [2024-11-15 11:53:56.384715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.044 [2024-11-15 11:53:56.384768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.044 [2024-11-15 11:53:56.384789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.044 [2024-11-15 11:53:56.384806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.044 [2024-11-15 11:53:56.384817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:31.044 [2024-11-15 11:53:56.384840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.044 qpair failed and we were unable to recover it. 00:30:31.044 [2024-11-15 11:53:56.394728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.044 [2024-11-15 11:53:56.394779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.044 [2024-11-15 11:53:56.394801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.044 [2024-11-15 11:53:56.394812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.044 [2024-11-15 11:53:56.394823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:31.044 [2024-11-15 11:53:56.394846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.044 qpair failed and we were unable to recover it. 00:30:31.044 [2024-11-15 11:53:56.404747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.044 [2024-11-15 11:53:56.404800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.044 [2024-11-15 11:53:56.404821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.044 [2024-11-15 11:53:56.404834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.044 [2024-11-15 11:53:56.404845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:31.044 [2024-11-15 11:53:56.404868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.044 qpair failed and we were unable to recover it. 
00:30:31.044 [2024-11-15 11:53:56.414799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.044 [2024-11-15 11:53:56.414862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.044 [2024-11-15 11:53:56.414881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.044 [2024-11-15 11:53:56.414893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.044 [2024-11-15 11:53:56.414904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:31.044 [2024-11-15 11:53:56.414927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.044 qpair failed and we were unable to recover it. 00:30:31.044 [2024-11-15 11:53:56.424813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.044 [2024-11-15 11:53:56.424908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.044 [2024-11-15 11:53:56.424924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.044 [2024-11-15 11:53:56.424935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.044 [2024-11-15 11:53:56.424947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:31.044 [2024-11-15 11:53:56.424975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.044 qpair failed and we were unable to recover it. 00:30:31.044 [2024-11-15 11:53:56.434697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.044 [2024-11-15 11:53:56.434760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.044 [2024-11-15 11:53:56.434778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.044 [2024-11-15 11:53:56.434789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.044 [2024-11-15 11:53:56.434800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:31.044 [2024-11-15 11:53:56.434823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.044 qpair failed and we were unable to recover it. 
00:30:31.044 [2024-11-15 11:53:56.444837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.044 [2024-11-15 11:53:56.444889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.044 [2024-11-15 11:53:56.444911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.044 [2024-11-15 11:53:56.444923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.044 [2024-11-15 11:53:56.444934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:31.044 [2024-11-15 11:53:56.444957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.044 qpair failed and we were unable to recover it. 00:30:31.044 [2024-11-15 11:53:56.454904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.044 [2024-11-15 11:53:56.454957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.045 [2024-11-15 11:53:56.454978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.045 [2024-11-15 11:53:56.454990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.045 [2024-11-15 11:53:56.455001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:31.045 [2024-11-15 11:53:56.455024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.045 qpair failed and we were unable to recover it. 00:30:31.045 [2024-11-15 11:53:56.464900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.045 [2024-11-15 11:53:56.464974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.045 [2024-11-15 11:53:56.464990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.045 [2024-11-15 11:53:56.465001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.045 [2024-11-15 11:53:56.465012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:31.045 [2024-11-15 11:53:56.465036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.045 qpair failed and we were unable to recover it. 
00:30:31.045 [2024-11-15 11:53:56.474910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.045 [2024-11-15 11:53:56.474965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.045 [2024-11-15 11:53:56.474986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.045 [2024-11-15 11:53:56.474998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.045 [2024-11-15 11:53:56.475009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:31.045 [2024-11-15 11:53:56.475031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.045 qpair failed and we were unable to recover it. 00:30:31.045 [2024-11-15 11:53:56.485001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.045 [2024-11-15 11:53:56.485090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.045 [2024-11-15 11:53:56.485106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.045 [2024-11-15 11:53:56.485117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.045 [2024-11-15 11:53:56.485127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:31.045 [2024-11-15 11:53:56.485150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.045 qpair failed and we were unable to recover it. 00:30:31.045 [2024-11-15 11:53:56.494875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.045 [2024-11-15 11:53:56.494933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.045 [2024-11-15 11:53:56.494953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.045 [2024-11-15 11:53:56.494965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.045 [2024-11-15 11:53:56.494976] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:31.045 [2024-11-15 11:53:56.494999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.045 qpair failed and we were unable to recover it. 
00:30:31.045 [2024-11-15 11:53:56.505015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.045 [2024-11-15 11:53:56.505067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.045 [2024-11-15 11:53:56.505090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.045 [2024-11-15 11:53:56.505103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.045 [2024-11-15 11:53:56.505114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:31.045 [2024-11-15 11:53:56.505138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.045 qpair failed and we were unable to recover it. 00:30:31.045 [2024-11-15 11:53:56.515033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.045 [2024-11-15 11:53:56.515096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.045 [2024-11-15 11:53:56.515117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.045 [2024-11-15 11:53:56.515132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.045 [2024-11-15 11:53:56.515143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:31.045 [2024-11-15 11:53:56.515166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.045 qpair failed and we were unable to recover it. 00:30:31.045 [2024-11-15 11:53:56.525070] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.045 [2024-11-15 11:53:56.525125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.045 [2024-11-15 11:53:56.525144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.045 [2024-11-15 11:53:56.525155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.045 [2024-11-15 11:53:56.525166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:31.045 [2024-11-15 11:53:56.525189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.045 qpair failed and we were unable to recover it. 
00:30:31.045 [2024-11-15 11:53:56.535111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.045 [2024-11-15 11:53:56.535163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.045 [2024-11-15 11:53:56.535182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.045 [2024-11-15 11:53:56.535194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.045 [2024-11-15 11:53:56.535205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:31.045 [2024-11-15 11:53:56.535228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.045 qpair failed and we were unable to recover it. 00:30:31.307 [2024-11-15 11:53:56.545125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.307 [2024-11-15 11:53:56.545177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.307 [2024-11-15 11:53:56.545199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.307 [2024-11-15 11:53:56.545211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.307 [2024-11-15 11:53:56.545222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:31.307 [2024-11-15 11:53:56.545246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.307 qpair failed and we were unable to recover it. 00:30:31.307 [2024-11-15 11:53:56.555145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.307 [2024-11-15 11:53:56.555196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.307 [2024-11-15 11:53:56.555218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.307 [2024-11-15 11:53:56.555230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.307 [2024-11-15 11:53:56.555241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:31.307 [2024-11-15 11:53:56.555269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.307 qpair failed and we were unable to recover it. 
00:30:31.307 [2024-11-15 11:53:56.565190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.307 [2024-11-15 11:53:56.565245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.307 [2024-11-15 11:53:56.565264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.307 [2024-11-15 11:53:56.565276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.307 [2024-11-15 11:53:56.565288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:31.307 [2024-11-15 11:53:56.565310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.307 qpair failed and we were unable to recover it. 00:30:31.307 [2024-11-15 11:53:56.575222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.307 [2024-11-15 11:53:56.575278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.307 [2024-11-15 11:53:56.575307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.307 [2024-11-15 11:53:56.575320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.307 [2024-11-15 11:53:56.575333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:31.307 [2024-11-15 11:53:56.575360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.307 qpair failed and we were unable to recover it. 00:30:31.307 [2024-11-15 11:53:56.585217] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.307 [2024-11-15 11:53:56.585306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.307 [2024-11-15 11:53:56.585334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.307 [2024-11-15 11:53:56.585347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.307 [2024-11-15 11:53:56.585358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:31.307 [2024-11-15 11:53:56.585386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.307 qpair failed and we were unable to recover it. 
00:30:31.307 [2024-11-15 11:53:56.595257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.307 [2024-11-15 11:53:56.595316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.307 [2024-11-15 11:53:56.595344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.307 [2024-11-15 11:53:56.595357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.307 [2024-11-15 11:53:56.595369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:31.307 [2024-11-15 11:53:56.595396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.307 qpair failed and we were unable to recover it. 00:30:31.307 [2024-11-15 11:53:56.605247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.307 [2024-11-15 11:53:56.605301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.307 [2024-11-15 11:53:56.605324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.307 [2024-11-15 11:53:56.605336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.308 [2024-11-15 11:53:56.605347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:31.308 [2024-11-15 11:53:56.605372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.308 qpair failed and we were unable to recover it. 00:30:31.308 [2024-11-15 11:53:56.615310] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.308 [2024-11-15 11:53:56.615382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.308 [2024-11-15 11:53:56.615400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.308 [2024-11-15 11:53:56.615412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.308 [2024-11-15 11:53:56.615423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:31.308 [2024-11-15 11:53:56.615447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.308 qpair failed and we were unable to recover it. 
00:30:31.308 [2024-11-15 11:53:56.625454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.308 [2024-11-15 11:53:56.625509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.308 [2024-11-15 11:53:56.625542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.308 [2024-11-15 11:53:56.625554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.308 [2024-11-15 11:53:56.625572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:31.308 [2024-11-15 11:53:56.625603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.308 qpair failed and we were unable to recover it. 00:30:31.308 [2024-11-15 11:53:56.635356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.308 [2024-11-15 11:53:56.635406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.308 [2024-11-15 11:53:56.635427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.308 [2024-11-15 11:53:56.635439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.308 [2024-11-15 11:53:56.635450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:31.308 [2024-11-15 11:53:56.635473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.308 qpair failed and we were unable to recover it. 00:30:31.308 [2024-11-15 11:53:56.645405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.308 [2024-11-15 11:53:56.645503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.308 [2024-11-15 11:53:56.645523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.308 [2024-11-15 11:53:56.645534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.308 [2024-11-15 11:53:56.645545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa108000b90 00:30:31.308 [2024-11-15 11:53:56.645573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:31.308 qpair failed and we were unable to recover it. 
00:30:31.308 Read completed with error (sct=0, sc=8)
00:30:31.308 starting I/O failed
00:30:31.308 Write completed with error (sct=0, sc=8)
00:30:31.308 starting I/O failed
00:30:31.308 Read completed with error (sct=0, sc=8)
00:30:31.308 starting I/O failed
00:30:31.308 Read completed with error (sct=0, sc=8)
00:30:31.308 starting I/O failed
00:30:31.308 Read completed with error (sct=0, sc=8)
00:30:31.308 starting I/O failed
00:30:31.308 Write completed with error (sct=0, sc=8)
00:30:31.308 starting I/O failed
00:30:31.308 Write completed with error (sct=0, sc=8)
00:30:31.308 starting I/O failed
00:30:31.308 Write completed with error (sct=0, sc=8)
00:30:31.308 starting I/O failed
00:30:31.308 Read completed with error (sct=0, sc=8)
00:30:31.308 starting I/O failed
00:30:31.308 Read completed with error (sct=0, sc=8)
00:30:31.308 starting I/O failed
00:30:31.308 Write completed with error (sct=0, sc=8)
00:30:31.308 starting I/O failed
00:30:31.308 Write completed with error (sct=0, sc=8)
00:30:31.308 starting I/O failed
00:30:31.308 Write completed with error (sct=0, sc=8)
00:30:31.308 starting I/O failed
00:30:31.308 Read completed with error (sct=0, sc=8)
00:30:31.308 starting I/O failed
00:30:31.308 Write completed with error (sct=0, sc=8)
00:30:31.308 starting I/O failed
00:30:31.308 Write completed with error (sct=0, sc=8)
00:30:31.308 starting I/O failed
00:30:31.308 Write completed with error (sct=0, sc=8)
00:30:31.308 starting I/O failed
00:30:31.308 Read completed with error (sct=0, sc=8)
00:30:31.308 starting I/O failed
00:30:31.308 Read completed with error (sct=0, sc=8)
00:30:31.308 starting I/O failed
00:30:31.308 Write completed with error (sct=0, sc=8)
00:30:31.308 starting I/O failed
00:30:31.308 Read completed with error (sct=0, sc=8)
00:30:31.308 starting I/O failed
00:30:31.308 Write completed with error (sct=0, sc=8)
00:30:31.308 starting I/O failed
00:30:31.308 Write completed with error (sct=0, sc=8)
00:30:31.308 starting I/O failed
00:30:31.308 Write completed with error (sct=0, sc=8)
00:30:31.308 starting I/O failed
00:30:31.308 Write completed with error (sct=0, sc=8)
00:30:31.308 starting I/O failed
00:30:31.308 Write completed with error (sct=0, sc=8)
00:30:31.308 starting I/O failed
00:30:31.308 Read completed with error (sct=0, sc=8)
00:30:31.308 starting I/O failed
00:30:31.308 Read completed with error (sct=0, sc=8)
00:30:31.308 starting I/O failed
00:30:31.308 Read completed with error (sct=0, sc=8)
00:30:31.308 starting I/O failed
00:30:31.308 Read completed with error (sct=0, sc=8)
00:30:31.308 starting I/O failed
00:30:31.308 Write completed with error (sct=0, sc=8)
00:30:31.308 starting I/O failed
00:30:31.308 Read completed with error (sct=0, sc=8)
00:30:31.308 starting I/O failed
00:30:31.308 [2024-11-15 11:53:56.646496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.308 [2024-11-15 11:53:56.655410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.308 [2024-11-15 11:53:56.655508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.308 [2024-11-15 11:53:56.655557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.308 [2024-11-15 11:53:56.655593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.308 [2024-11-15 11:53:56.655615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0fc000b90
00:30:31.308 [2024-11-15 11:53:56.655663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.308 qpair failed and we were unable to recover it.
00:30:31.308 [2024-11-15 11:53:56.665457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.308 [2024-11-15 11:53:56.665522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.308 [2024-11-15 11:53:56.665559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.308 [2024-11-15 11:53:56.665587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.308 [2024-11-15 11:53:56.665602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0fc000b90
00:30:31.308 [2024-11-15 11:53:56.665634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.308 qpair failed and we were unable to recover it.
00:30:31.308 [2024-11-15 11:53:56.675470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.308 [2024-11-15 11:53:56.675527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.308 [2024-11-15 11:53:56.675547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.308 [2024-11-15 11:53:56.675557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.308 [2024-11-15 11:53:56.675572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa0fc000b90
00:30:31.308 [2024-11-15 11:53:56.675594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:30:31.308 qpair failed and we were unable to recover it.
00:30:31.308 [2024-11-15 11:53:56.685515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.308 [2024-11-15 11:53:56.685619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.308 [2024-11-15 11:53:56.685707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.308 [2024-11-15 11:53:56.685734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.308 [2024-11-15 11:53:56.685755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x80b0c0
00:30:31.308 [2024-11-15 11:53:56.685810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:31.308 qpair failed and we were unable to recover it.
00:30:31.308 [2024-11-15 11:53:56.695542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.308 [2024-11-15 11:53:56.695649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.308 [2024-11-15 11:53:56.695677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.308 [2024-11-15 11:53:56.695692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.308 [2024-11-15 11:53:56.695705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x80b0c0
00:30:31.308 [2024-11-15 11:53:56.695734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:31.309 qpair failed and we were unable to recover it.
00:30:31.309 [2024-11-15 11:53:56.705544] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.309 [2024-11-15 11:53:56.705667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.309 [2024-11-15 11:53:56.705735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.309 [2024-11-15 11:53:56.705762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.309 [2024-11-15 11:53:56.705785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa100000b90
00:30:31.309 [2024-11-15 11:53:56.705851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:30:31.309 qpair failed and we were unable to recover it.
00:30:31.309 [2024-11-15 11:53:56.715577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.309 [2024-11-15 11:53:56.715655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.309 [2024-11-15 11:53:56.715687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.309 [2024-11-15 11:53:56.715704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.309 [2024-11-15 11:53:56.715719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa100000b90 00:30:31.309 [2024-11-15 11:53:56.715755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.309 qpair failed and we were unable to recover it. 00:30:31.309 [2024-11-15 11:53:56.715938] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:30:31.309 A controller has encountered a failure and is being reset. 00:30:31.309 [2024-11-15 11:53:56.716047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x800e00 (9): Bad file descriptor 00:30:31.570 Controller properly reset. 00:30:31.570 Initializing NVMe Controllers 00:30:31.570 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:31.570 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:31.570 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:31.570 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:31.570 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:31.570 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:31.570 Initialization complete. Launching workers. 
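The five-line pattern that repeats above is one host reconnect attempt per I/O qpair: the target rejects the fabrics CONNECT ("Unknown controller ID 0x1", the controller the host names no longer exists after the induced disconnect), the host sees the CONNECT completion fail with sct 1, sc 130, and the qpair is torn down with transport error -6 (ENXIO), until the keep-alive failure finally triggers the full controller reset logged just above. A minimal sketch for reading the status pair, assuming the NVMe-oF CONNECT command-specific status codes from the spec (the 0x82 meaning is not something this log itself states):

  # decode the "sct 1, sc 130" printed by nvme_fabric.c
  sct=1 sc=130
  printf 'sct=0x%x sc=0x%x\n' "$sct" "$sc"   # prints sct=0x1 sc=0x82
  # sct 0x1 is "command specific"; for a fabrics CONNECT, sc 0x82 is
  # "Connect Invalid Parameters" per the NVMe-oF spec (assumption), which
  # matches the target-side "Unknown controller ID 0x1" complaint.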
00:30:31.570 Starting thread on core 1
00:30:31.570 Starting thread on core 2
00:30:31.570 Starting thread on core 3
00:30:31.570 Starting thread on core 0
00:30:31.570 11:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:30:31.570
00:30:31.570 real 0m11.645s
00:30:31.570 user 0m21.738s
00:30:31.570 sys 0m3.921s
00:30:31.570 11:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable
00:30:31.570 11:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:31.570 ************************************
00:30:31.570 END TEST nvmf_target_disconnect_tc2
00:30:31.570 ************************************
00:30:31.570 11:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:30:31.570 11:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:30:31.570 11:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:30:31.570 11:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup
00:30:31.570 11:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:30:31.570 11:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:31.570 11:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:30:31.570 11:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:31.570 11:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:31.570 rmmod nvme_tcp
00:30:31.570 rmmod nvme_fabrics
00:30:31.570 rmmod nvme_keyring
00:30:31.570 11:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:31.570 11:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e
00:30:31.570 11:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0
00:30:31.570 11:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1265050 ']'
00:30:31.570 11:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1265050
00:30:31.570 11:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' -z 1265050 ']'
00:30:31.570 11:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # kill -0 1265050
00:30:31.570 11:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # uname
00:30:31.570 11:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:30:31.570 11:53:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1265050
00:30:31.570 11:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_4
00:30:31.570 11:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_4 = sudo ']'
00:30:31.570 11:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1265050'
00:30:31.570 killing process with pid 1265050
11:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@971 -- # kill 1265050
00:30:31.570 11:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@976 -- # wait 1265050
00:30:31.832 11:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:30:31.832 11:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:30:31.832 11:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:30:31.832 11:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr
00:30:31.832 11:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save
00:30:31.832 11:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:30:31.832 11:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore
00:30:31.832 11:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:30:31.832 11:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:30:31.832 11:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:31.832 11:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:31.832 11:53:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:33.746 11:53:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:30:33.746
00:30:33.746 real 0m22.037s
00:30:33.746 user 0m50.331s
00:30:33.746 sys 0m10.128s
00:30:33.746 11:53:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable
00:30:33.746 11:53:59 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:30:33.746 ************************************
00:30:33.746 END TEST nvmf_target_disconnect
00:30:33.746 ************************************
00:30:34.008 11:53:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:30:34.008
00:30:34.008 real 6m34.782s
00:30:34.008 user 11m35.420s
00:30:34.008 sys 2m15.678s
00:30:34.008 11:53:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable
00:30:34.008 11:53:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:30:34.008 ************************************
00:30:34.008 END TEST nvmf_host
00:30:34.008 ************************************
00:30:34.008 11:53:59 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]]
00:30:34.008 11:53:59 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]]
00:30:34.008 11:53:59 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:30:34.008 11:53:59 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:30:34.008 11:53:59 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable
00:30:34.008 11:53:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:30:34.008 ************************************
00:30:34.008 START TEST nvmf_target_core_interrupt_mode
00:30:34.008 ************************************
00:30:34.008 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
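run_test, as traced above, wraps each suite in timing output and the START/END banners; the invocation it logs can be repeated by hand against the same checkout (a sketch, assuming the workspace layout shown in this log):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode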
00:30:34.008 * Looking for test storage...
* Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:30:34.008 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:30:34.008 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version
00:30:34.008 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:30:34.269 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:30:34.269 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:30:34.269 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l
00:30:34.269 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l
00:30:34.269 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-:
00:30:34.269 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1
00:30:34.269 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-:
00:30:34.269 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2
00:30:34.269 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<'
00:30:34.269 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2
00:30:34.269 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1
00:30:34.269 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:30:34.269 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in
00:30:34.269 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1
00:30:34.269 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 ))
00:30:34.269 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:30:34.269 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1
00:30:34.269 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1
00:30:34.269 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:30:34.269 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1
00:30:34.269 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1
00:30:34.269 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:30:34.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:34.270 --rc genhtml_branch_coverage=1
00:30:34.270 --rc genhtml_function_coverage=1
00:30:34.270 --rc genhtml_legend=1
00:30:34.270 --rc geninfo_all_blocks=1
00:30:34.270 --rc geninfo_unexecuted_blocks=1
00:30:34.270
00:30:34.270 '
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:30:34.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:34.270 --rc genhtml_branch_coverage=1
00:30:34.270 --rc genhtml_function_coverage=1
00:30:34.270 --rc genhtml_legend=1
00:30:34.270 --rc geninfo_all_blocks=1
00:30:34.270 --rc geninfo_unexecuted_blocks=1
00:30:34.270
00:30:34.270 '
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:30:34.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:34.270 --rc genhtml_branch_coverage=1
00:30:34.270 --rc genhtml_function_coverage=1
00:30:34.270 --rc genhtml_legend=1
00:30:34.270 --rc geninfo_all_blocks=1
00:30:34.270 --rc geninfo_unexecuted_blocks=1
00:30:34.270
00:30:34.270 '
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:30:34.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:34.270 --rc genhtml_branch_coverage=1
00:30:34.270 --rc genhtml_function_coverage=1
00:30:34.270 --rc genhtml_legend=1
00:30:34.270 --rc geninfo_all_blocks=1
00:30:34.270 --rc geninfo_unexecuted_blocks=1
00:30:34.270
00:30:34.270 '
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']'
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@")
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]]
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:30:34.270 ************************************
00:30:34.270 START TEST nvmf_abort
00:30:34.270 ************************************
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode
00:30:34.270 * Looking for test storage...
* Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version
00:30:34.270 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:30:34.532 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:30:34.532 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:30:34.532 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l
00:30:34.532 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l
00:30:34.532 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-:
00:30:34.532 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1
00:30:34.532 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-:
00:30:34.532 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2
00:30:34.532 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<'
00:30:34.532 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2
00:30:34.532 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1
00:30:34.532 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:30:34.532 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in
00:30:34.532 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1
00:30:34.532 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 ))
00:30:34.532 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:30:34.532 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1
00:30:34.532 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1
00:30:34.532 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:30:34.532 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1
00:30:34.532 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1
00:30:34.532 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2
00:30:34.532 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2
00:30:34.532 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:30:34.532 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2
00:30:34.532 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2
00:30:34.532 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:30:34.532 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:30:34.532 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0
00:30:34.532 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:30:34.532 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:30:34.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:34.532 --rc genhtml_branch_coverage=1
00:30:34.532 --rc genhtml_function_coverage=1
00:30:34.532 --rc genhtml_legend=1
00:30:34.532 --rc geninfo_all_blocks=1
00:30:34.532 --rc geninfo_unexecuted_blocks=1
00:30:34.532
00:30:34.532 '
00:30:34.532 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:30:34.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:34.532 --rc genhtml_branch_coverage=1
00:30:34.532 --rc genhtml_function_coverage=1
00:30:34.532 --rc genhtml_legend=1
00:30:34.532 --rc geninfo_all_blocks=1
00:30:34.532 --rc geninfo_unexecuted_blocks=1
00:30:34.532
00:30:34.532 '
00:30:34.532 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:30:34.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:34.532 --rc genhtml_branch_coverage=1
00:30:34.532 --rc genhtml_function_coverage=1
00:30:34.532 --rc genhtml_legend=1
00:30:34.532 --rc geninfo_all_blocks=1
00:30:34.532 --rc geninfo_unexecuted_blocks=1
00:30:34.532
00:30:34.532 '
00:30:34.532 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:30:34.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:34.532 --rc genhtml_branch_coverage=1
00:30:34.533 --rc genhtml_function_coverage=1
00:30:34.533 --rc genhtml_legend=1
00:30:34.533 --rc geninfo_all_blocks=1
00:30:34.533 --rc geninfo_unexecuted_blocks=1
00:30:34.533
00:30:34.533 '
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable
00:30:34.533 11:53:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=()
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=()
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=()
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=()
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=()
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=()
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=()
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:30:42.672 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
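gather_supported_nvmf_pci_devs, traced above, builds the e810 list from the Intel device IDs 0x1592/0x159b and then resolves each function's kernel net device through /sys/bus/pci/devices/$pci/net. The same lookup can be done standalone (a sketch, not part of nvmf/common.sh; assumes lspci is installed):

  # list the net devices behind each Intel E810 function (0x159b, as found above)
  for pci in $(lspci -Dnmm -d 8086:159b | awk '{print $1}'); do
    echo "$pci -> $(ls "/sys/bus/pci/devices/$pci/net/")"
  done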
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:30:42.672 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]]
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:30:42.672 Found net devices under 0000:4b:00.0: cvl_0_0
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]]
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:30:42.672 Found net devices under 0000:4b:00.1: cvl_0_1
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:30:42.672 11:54:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:30:42.673 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:30:42.673 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:30:42.673 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:30:42.673 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:30:42.673 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:30:42.673 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:30:42.673 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:30:42.673 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:30:42.673 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:30:42.673 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:30:42.673 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:30:42.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:30:42.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.659 ms
00:30:42.673
00:30:42.673 --- 10.0.0.2 ping statistics ---
00:30:42.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:42.673 rtt min/avg/max/mdev = 0.659/0.659/0.659/0.000 ms
00:30:42.673 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:30:42.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:30:42.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms
00:30:42.673
00:30:42.673 --- 10.0.0.1 ping statistics ---
00:30:42.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:42.673 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms
00:30:42.673 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:30:42.673 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0
00:30:42.673 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:30:42.673 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:30:42.673 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:30:42.673 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:30:42.673 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:30:42.673 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:30:42.673 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:30:42.673 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE
00:30:42.673 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:30:42.673 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable
00:30:42.673 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:30:42.673 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1270794
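nvmf_tcp_init, traced above, splits the two E810 ports so target and initiator talk over a real link: cvl_0_0 (10.0.0.2) moves into the cvl_0_0_ns_spdk namespace, cvl_0_1 (10.0.0.1) stays in the root namespace, and an iptables rule admits NVMe/TCP on port 4420. Condensed from the exact commands logged:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # sanity check, both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1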
00:30:42.673 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1270794
00:30:42.673 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE
00:30:42.673 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 1270794 ']'
00:30:42.673 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:42.673 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100
00:30:42.673 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:42.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:42.673 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable
00:30:42.673 11:54:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:30:42.673 [2024-11-15 11:54:07.414844] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:30:42.673 [2024-11-15 11:54:07.415983] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
00:30:42.673 [2024-11-15 11:54:07.416033] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:42.673 [2024-11-15 11:54:07.515032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:30:42.673 [2024-11-15 11:54:07.566585] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:42.673 [2024-11-15 11:54:07.566637] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:42.673 [2024-11-15 11:54:07.566648] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:42.673 [2024-11-15 11:54:07.566658] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:42.673 [2024-11-15 11:54:07.566667] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:42.673 [2024-11-15 11:54:07.568817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:30:42.673 [2024-11-15 11:54:07.568976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:30:42.673 [2024-11-15 11:54:07.568978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:30:42.673 [2024-11-15 11:54:07.647986] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:30:42.673 [2024-11-15 11:54:07.649122] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:30:42.673 [2024-11-15 11:54:07.649547] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
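The target then starts inside that namespace. From the command line logged above: -i 0 selects the shared-memory id, -e 0xFFFF enables the full tracepoint mask, --interrupt-mode switches the reactors from polling to event-driven operation, and -m 0xE pins the three reactors to cores 1-3 (hence "Total cores available: 3"). As one command:

  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0xE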
00:30:42.673 [2024-11-15 11:54:07.649701] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:30:42.935 11:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:30:42.935 11:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@866 -- # return 0
00:30:42.935 11:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:30:42.935 11:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable
00:30:42.935 11:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:30:42.935 11:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:42.935 11:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256
00:30:42.935 11:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:42.935 11:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:30:42.935 [2024-11-15 11:54:08.281955] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:42.935 11:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:42.935 11:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0
00:30:42.935 11:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:42.935 11:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:30:42.935 Malloc0
00:30:42.935 11:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:42.935 11:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:30:42.935 11:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:30:42.935 11:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:30:42.935 Delay0
00:30:42.935 11:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:30:42.935 11:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:30:42.935 11:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
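rpc_cmd in these traces forwards its arguments to the target's RPC socket (in SPDK's test harness it is a wrapper around scripts/rpc.py; that wrapping is an assumption here, but the subcommands and arguments are exactly as logged). The same configuration issued directly, with the Delay0 bdev adding roughly one second of artificial latency (the four microsecond arguments) so plenty of I/O stays outstanding for the abort test:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0     # 64 MiB bdev, 4096-byte blocks
  ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000            # ~1 s read/write latency
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0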
00:30:42.935 11:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:42.935 11:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.935 11:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:42.935 11:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.935 11:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:42.935 [2024-11-15 11:54:08.385917] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:42.935 11:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.935 11:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:42.935 11:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:42.935 11:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:42.935 11:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:42.935 11:54:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:30:43.196 [2024-11-15 11:54:08.566781] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:30:45.741 Initializing NVMe Controllers 00:30:45.741 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:45.741 controller IO queue size 128 less than required 00:30:45.741 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:30:45.741 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:30:45.741 Initialization complete. Launching workers. 
00:30:45.741 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 27428 00:30:45.741 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 27489, failed to submit 66 00:30:45.741 success 27428, unsuccessful 61, failed 0 00:30:45.741 11:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:45.741 11:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.741 11:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:45.741 11:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:45.741 11:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:30:45.741 11:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:30:45.741 11:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:45.741 11:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:30:45.741 11:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:45.741 11:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:30:45.741 11:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:45.741 11:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:45.742 rmmod nvme_tcp 00:30:45.742 rmmod nvme_fabrics 00:30:45.742 rmmod nvme_keyring 00:30:45.742 11:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:45.742 11:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:30:45.742 11:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:30:45.742 11:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1270794 ']' 00:30:45.742 11:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1270794 00:30:45.742 11:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 1270794 ']' 00:30:45.742 11:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 1270794 00:30:45.742 11:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:30:45.742 11:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:45.742 11:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1270794 00:30:45.742 11:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:45.742 11:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:45.742 11:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1270794' 00:30:45.742 killing process with pid 1270794 
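The abort statistics above reconcile exactly: 127 completed + 27,428 aborted I/Os = 27,555 issued, matching the 27,489 abort commands submitted plus 66 that failed to submit; of the submitted aborts, 27,428 succeeded and 61 missed, and 61 + 66 = 127, precisely the I/Os that completed before an abort could land. The "queue size 128 less than required" warning is expected here: queueing more requests than the controller accepts is what keeps a backlog available to abort. Below is a minimal sketch of the same topology, assuming a running nvmf_tgt and using $SPDK and $rpc as shorthand for the paths in the trace (not variables from the log):

  rpc="$SPDK/scripts/rpc.py"
  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256        # TCP transport, 8 KiB I/O units
  $rpc bdev_malloc_create 64 4096 -b Malloc0                 # 64 MiB RAM disk, 4 KiB blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000           # ~1 s latencies keep I/Os abortable
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # 128-deep I/O for 1 s, aborting requests as they queue behind Delay0:
  "$SPDK/build/examples/abort" -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
       -c 0x1 -t 1 -l warning -q 128

The delay bdev is the key design choice: with a 1,000,000 us latency on every path, reads sit in the target long enough for the abort commands to catch them.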
00:30:45.742 11:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@971 -- # kill 1270794 00:30:45.742 11:54:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@976 -- # wait 1270794 00:30:45.742 11:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:45.742 11:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:45.742 11:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:45.742 11:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:30:45.742 11:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:30:45.742 11:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:45.742 11:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:30:45.742 11:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:45.742 11:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:45.742 11:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:45.742 11:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:45.742 11:54:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:47.654 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:47.654 00:30:47.654 real 0m13.493s 00:30:47.654 user 0m11.403s 00:30:47.654 sys 0m6.841s 00:30:47.654 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:47.654 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:47.654 ************************************ 00:30:47.654 END TEST nvmf_abort 00:30:47.654 ************************************ 00:30:47.914 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:47.914 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:47.914 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:47.914 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:47.915 ************************************ 00:30:47.915 START TEST nvmf_ns_hotplug_stress 00:30:47.915 ************************************ 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:47.915 * Looking for test storage... 
00:30:47.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:47.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:47.915 --rc genhtml_branch_coverage=1 00:30:47.915 --rc genhtml_function_coverage=1 00:30:47.915 --rc genhtml_legend=1 00:30:47.915 --rc geninfo_all_blocks=1 00:30:47.915 --rc geninfo_unexecuted_blocks=1 00:30:47.915 00:30:47.915 ' 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:47.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:47.915 --rc genhtml_branch_coverage=1 00:30:47.915 --rc genhtml_function_coverage=1 00:30:47.915 --rc genhtml_legend=1 00:30:47.915 --rc geninfo_all_blocks=1 00:30:47.915 --rc geninfo_unexecuted_blocks=1 00:30:47.915 00:30:47.915 ' 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:47.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:47.915 --rc genhtml_branch_coverage=1 00:30:47.915 --rc genhtml_function_coverage=1 00:30:47.915 --rc genhtml_legend=1 00:30:47.915 --rc geninfo_all_blocks=1 00:30:47.915 --rc geninfo_unexecuted_blocks=1 00:30:47.915 00:30:47.915 ' 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:47.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:47.915 --rc genhtml_branch_coverage=1 00:30:47.915 --rc genhtml_function_coverage=1 
00:30:47.915 --rc genhtml_legend=1 00:30:47.915 --rc geninfo_all_blocks=1 00:30:47.915 --rc geninfo_unexecuted_blocks=1 00:30:47.915 00:30:47.915 ' 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:47.915 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:48.176 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:48.176 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:48.176 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:48.176 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:48.176 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:48.176 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:48.176 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:48.176 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:30:48.176 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:48.176 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:48.176 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:30:48.176 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.176 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.177 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.177 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:30:48.177 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.177 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:30:48.177 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:48.177 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:48.177 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:48.177 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:48.177 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:48.177 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:48.177 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:48.177 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:48.177 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:48.177 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:48.177 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:48.177 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:30:48.177 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:48.177 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:48.177 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:48.177 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:48.177 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:48.177 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:48.177 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:48.177 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:48.177 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:48.177 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:48.177 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:30:48.177 11:54:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:56.315 11:54:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:56.315 11:54:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:56.315 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:56.315 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:56.315 
11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:56.315 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:56.315 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:56.315 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:56.316 11:54:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:56.316 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:56.316 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:30:56.316 00:30:56.316 --- 10.0.0.2 ping statistics --- 00:30:56.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:56.316 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:56.316 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:56.316 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:30:56.316 00:30:56.316 --- 10.0.0.1 ping statistics --- 00:30:56.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:56.316 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1275953 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1275953 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 1275953 ']' 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:56.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
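What makes this a "phy" run is visible in the trace above: instead of veth pairs, one port of the detected e810 pair (cvl_0_0) is moved into a private network namespace and the target runs inside it, so initiator (cvl_0_1, 10.0.0.1) and target (cvl_0_0, 10.0.0.2) talk over real NIC hardware on a single host. Condensed from the traced commands, as a sketch of what nvmftestinit does rather than the library function itself ($SPDK as above):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                             # reachability both ways
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # The target is then launched inside the namespace, as traced above:
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0xE &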
00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:56.316 11:54:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:56.316 [2024-11-15 11:54:20.958476] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:56.316 [2024-11-15 11:54:20.959611] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:30:56.316 [2024-11-15 11:54:20.959661] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:56.316 [2024-11-15 11:54:21.058446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:56.316 [2024-11-15 11:54:21.110624] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:56.316 [2024-11-15 11:54:21.110676] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:56.316 [2024-11-15 11:54:21.110688] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:56.316 [2024-11-15 11:54:21.110698] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:56.316 [2024-11-15 11:54:21.110706] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:56.316 [2024-11-15 11:54:21.112848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:56.316 [2024-11-15 11:54:21.113010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:56.316 [2024-11-15 11:54:21.113011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:56.316 [2024-11-15 11:54:21.192736] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:56.316 [2024-11-15 11:54:21.193967] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:56.316 [2024-11-15 11:54:21.194342] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:56.316 [2024-11-15 11:54:21.194492] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
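The startup notices decode as follows: -m 0xE is a reactor core mask (0xE = 0b1110, i.e. cores 1-3), which is why spdk_app_start reports three cores available and exactly three reactors come up; --interrupt-mode then puts app_thread and the three nvmf poll-group threads on event-driven wakeups instead of busy polling, which is what this whole interrupt-mode test group exercises. A quick way to expand such a mask, in plain bash arithmetic:

  mask=0xE
  for core in {0..7}; do
      (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
  done
  # prints cores 1, 2 and 3 -- matching the three "Reactor started" notices above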
00:30:56.316 11:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:56.316 11:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:30:56.316 11:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:56.316 11:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:56.316 11:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:56.586 11:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:56.586 11:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:30:56.586 11:54:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:56.586 [2024-11-15 11:54:21.978082] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:56.586 11:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:56.849 11:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:56.849 [2024-11-15 11:54:22.342696] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:57.110 11:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:57.110 11:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:30:57.372 Malloc0 00:30:57.372 11:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:57.632 Delay0 00:30:57.632 11:54:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:57.632 11:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:30:57.893 NULL1 00:30:57.893 11:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
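Setup for the hotplug stress test is now complete: subsystem cnode1 carries Delay0 (NSID 1) plus a 1000 MiB, 512-byte-block null bdev NULL1, and the loop that follows detaches and reattaches the delay namespace while stepping NULL1's size up each pass (null_size 1001, 1002, ...), all under continuous random-read load. The "Read completed with error (sct=0, sc=11)" floods below are the point of the test: generic status 0x0b is consistent with reads landing on a namespace that has just been yanked. A condensed sketch of the traced loop, not the script verbatim ($rpc and $SPDK as above):

  "$SPDK/build/bin/spdk_nvme_perf" -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &                  # 30 s of 512 B random reads
  perf_pid=$!
  size=1000
  while kill -0 "$perf_pid" 2>/dev/null; do                      # churn until perf exits
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # detach NSID 1 (Delay0)
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # reattach it
      $rpc bdev_null_resize NULL1 $(( ++size ))                     # grow NULL1 each pass
  done
  wait "$perf_pid"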
00:30:58.154 11:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:30:58.154 11:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1276607 00:30:58.154 11:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1276607 00:30:58.154 11:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:58.154 11:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:58.415 11:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:30:58.415 11:54:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:30:58.675 true 00:30:58.675 11:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1276607 00:30:58.675 11:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:58.936 11:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:58.936 11:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:30:58.936 11:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:30:59.196 true 00:30:59.196 11:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1276607 00:30:59.196 11:54:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:00.585 Read completed with error (sct=0, sc=11) 00:31:00.585 11:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:00.585 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:00.585 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:00.585 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:00.585 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:00.585 11:54:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:31:00.585 11:54:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:31:00.845 true 00:31:00.845 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1276607 00:31:00.845 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:00.845 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:01.107 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:31:01.107 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:31:01.368 true 00:31:01.368 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1276607 00:31:01.368 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:01.368 11:54:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:01.629 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:31:01.629 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:31:01.889 true 00:31:01.889 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1276607 00:31:01.889 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:02.149 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:02.149 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:31:02.149 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:31:02.410 true 00:31:02.410 11:54:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1276607 00:31:02.410 11:54:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:03.793 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:03.793 11:54:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:03.793 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:03.793 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:03.794 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:03.794 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:03.794 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:03.794 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:03.794 11:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:31:03.794 11:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:31:04.054 true 00:31:04.054 11:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1276607 00:31:04.054 11:54:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:04.994 11:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:04.994 11:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:31:04.994 11:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:31:05.254 true 00:31:05.254 11:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1276607 00:31:05.254 11:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:05.254 11:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:05.514 11:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:31:05.514 11:54:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:31:05.773 true 00:31:05.773 11:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1276607 00:31:05.773 11:54:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:06.764 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:06.764 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:06.764 11:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:06.764 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:06.764 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:06.764 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:07.061 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:07.061 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:07.061 11:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:31:07.061 11:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:31:07.061 true 00:31:07.061 11:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1276607 00:31:07.061 11:54:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:08.013 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:08.013 11:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:08.274 11:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:31:08.274 11:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:31:08.274 true 00:31:08.274 11:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1276607 00:31:08.274 11:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:08.535 11:54:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:08.795 11:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:31:08.796 11:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:31:08.796 true 00:31:08.796 11:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1276607 00:31:08.796 11:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:09.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:09.056 11:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:09.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:09.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:09.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:09.056 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:09.316 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:09.316 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:09.316 11:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:31:09.316 11:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:31:09.316 true 00:31:09.576 11:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1276607 00:31:09.576 11:54:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:10.148 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:10.148 11:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:10.409 11:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:31:10.409 11:54:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:31:10.669 true 00:31:10.669 11:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1276607 00:31:10.669 11:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:10.931 11:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:10.931 11:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1015 00:31:10.931 11:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:31:11.191 true 00:31:11.192 11:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1276607 00:31:11.192 11:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:11.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:11.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:11.452 11:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:11.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:11.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:11.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:11.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:11.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:11.452 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:11.452 11:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:31:11.452 11:54:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:31:11.713 true 00:31:11.713 11:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1276607 00:31:11.713 11:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:12.656 11:54:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:12.656 11:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:31:12.656 11:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:31:12.917 true 00:31:12.917 11:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1276607 00:31:12.917 11:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:13.178 11:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:31:13.439 11:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:31:13.439 11:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:31:13.439 true 00:31:13.439 11:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1276607 00:31:13.439 11:54:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:14.826 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:14.826 11:54:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:14.826 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:14.826 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:14.826 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:14.826 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:14.826 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:14.826 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:14.826 11:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:31:14.826 11:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:31:14.826 true 00:31:15.086 11:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1276607 00:31:15.086 11:54:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:16.029 11:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:16.029 11:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:31:16.029 11:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:31:16.029 true 00:31:16.290 11:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1276607 00:31:16.290 11:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:16.290 11:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:16.551 11:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:31:16.551 11:54:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:31:16.551 true 00:31:16.812 11:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1276607 00:31:16.812 11:54:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:17.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:17.755 11:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:17.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:17.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:17.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:18.015 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:18.015 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:18.015 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:18.015 11:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:31:18.015 11:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:31:18.275 true 00:31:18.275 11:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1276607 00:31:18.275 11:54:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:19.218 11:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:19.218 11:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:31:19.218 11:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:31:19.479 true 00:31:19.479 11:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1276607 00:31:19.479 11:54:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:19.479 11:54:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:19.739 11:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:31:19.740 11:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:31:20.000 true 00:31:20.000 11:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1276607 00:31:20.001 11:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:20.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:20.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:20.001 11:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:20.001 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:20.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:20.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:20.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:20.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:20.261 11:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:31:20.261 11:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:31:20.522 true 00:31:20.522 11:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1276607 00:31:20.522 11:54:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:21.465 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:21.465 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:21.466 11:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:21.466 11:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:31:21.466 11:54:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:31:21.726 true 00:31:21.726 11:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1276607 00:31:21.726 11:54:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:21.726 11:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:21.987 11:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:31:21.987 11:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:31:22.248 true 00:31:22.248 11:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1276607 00:31:22.248 11:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:22.509 11:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:22.509 11:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:31:22.509 11:54:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:31:22.771 true 00:31:22.771 11:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1276607 00:31:22.771 11:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:23.040 11:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:23.040 11:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:31:23.040 11:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:31:23.304 true 00:31:23.304 11:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1276607 00:31:23.304 11:54:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:24.689 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:24.689 11:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:31:24.689 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:24.689 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:24.689 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:24.689 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:24.689 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:24.689 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:24.689 11:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:31:24.689 11:54:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:31:24.689 true 00:31:24.689 11:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1276607 00:31:24.689 11:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:25.630 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:25.630 11:54:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:25.630 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:25.889 11:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:31:25.889 11:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:31:25.889 true 00:31:25.889 11:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1276607 00:31:25.889 11:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:26.149 11:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:26.409 11:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:31:26.409 11:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:31:26.409 true 00:31:26.669 11:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1276607 00:31:26.669 11:54:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:26.669 11:54:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:26.929 11:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:31:26.929 11:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:31:27.190 true 00:31:27.190 11:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1276607 00:31:27.190 11:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:27.190 11:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:27.452 11:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:31:27.452 11:54:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:31:27.713 true 00:31:27.713 11:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1276607 00:31:27.713 11:54:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:28.654 Initializing NVMe Controllers 00:31:28.654 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:28.654 Controller IO queue size 128, less than required. 00:31:28.654 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:28.654 Controller IO queue size 128, less than required. 00:31:28.654 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:28.654 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:28.654 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:28.654 Initialization complete. Launching workers. 
00:31:28.654 ========================================================
00:31:28.654 Latency(us)
00:31:28.654 Device Information : IOPS MiB/s Average min max
00:31:28.654 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2301.05 1.12 32096.81 1566.33 1008087.32
00:31:28.654 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 16855.32 8.23 7568.75 1141.03 401118.16
00:31:28.654 ========================================================
00:31:28.654 Total : 19156.37 9.35 10515.04 1141.03 1008087.32
00:31:28.654
00:31:28.654 11:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:28.915 11:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:31:28.915 11:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:31:29.177 true 00:31:29.177 11:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1276607 00:31:29.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1276607) - No such process 00:31:29.177 11:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1276607 00:31:29.177 11:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:29.443 11:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:29.443 11:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:31:29.443 11:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:31:29.443 11:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:31:29.443 11:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:29.443 11:54:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:31:29.704 null0 00:31:29.704 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:29.704
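The I/O summary above is internally consistent: both rows work out to roughly 512-byte reads (8.23 MiB/s / 16855.32 IOPS ~= 512 B, and 1.12 MiB/s / 2301.05 IOPS ~= 510 B), and the Total average latency is the IOPS-weighted mean of the per-namespace averages, (2301.05 * 32096.81 + 16855.32 * 7568.75) / 19156.37 ~= 10515 us, matching the 10515.04 shown up to rounding of the inputs. The far higher average and max latency on NSID 1 is plausibly the cost of it being the namespace that was hot-removed and re-attached throughout the run. The "kill: (1276607) - No such process" entry is the loop's natural exit: the workload process has finished, so the @44 liveness check fails and the script tears down namespaces 1 and 2 before starting the multi-threaded phase.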
11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:29.705 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:31:29.965 null2 00:31:29.965 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:29.965 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:29.965 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:31:29.965 null3 00:31:29.965 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:29.965 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:29.965 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:31:30.225 null4 00:31:30.226 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:30.226 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:30.226 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:31:30.486 null5 00:31:30.486 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:30.486 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:30.486 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:31:30.486 null6 00:31:30.486 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:30.486 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:30.486 11:54:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:31:30.748 null7 00:31:30.748 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:30.748 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:30.748 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:31:30.748 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:30.748 11:54:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
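At this point all eight workers have been forked. The setup traced at @58-@64 creates eight null bdevs of 100 MB with a 4096-byte block size (null0 through null7) and starts one add_remove worker per bdev, pairing worker i with NSID i+1 and bdev null<i>; the @66 wait on the collected PIDs (1282752 1282754 ... in the trace just below) blocks until every worker finishes. A sketch of this phase, again reconstructed from the markers, with rpc_py as assumed in the earlier sketch; the add_remove helper itself is sketched at the end of this excerpt:

    # Reconstructed sketch of the concurrent phase (@58-@66), assumed names.
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do                 # @59
        "$rpc_py" bdev_null_create "null$i" 100 4096     # @60: 100 MB bdev, 4096 B block size
    done
    for ((i = 0; i < nthreads; i++)); do                 # @62
        add_remove $((i + 1)) "null$i" &                 # @63: worker i cycles NSID i+1 <-> null$i
        pids+=($!)                                       # @64: remember each worker's PID
    done
    wait "${pids[@]}"                                    # @66: block until all eight workers exit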
00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1282752 1282754 1282755 1282757 1282759 1282761 1282763 1282765 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.749 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:31.010 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:31.010 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:31.010 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:31.010 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:31.010 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:31.010 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:31.010 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:31.010 11:54:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:31.010 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.010 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.010 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:31.010 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.011 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.011 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:31.011 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.011 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.011 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:31.272 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.272 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.272 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:31.272 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.272 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.272 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:31.272 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.272 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.272 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:31.272 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.272 11:54:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.272 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:31.272 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.272 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.272 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:31.272 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:31.272 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:31.272 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:31.272 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:31.272 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:31.533 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:31.533 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:31.533 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:31.533 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.533 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.533 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:31.533 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.533 11:54:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.533 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:31.533 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.533 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.533 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:31.533 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.533 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.533 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:31.533 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.533 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.533 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:31.533 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.533 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.533 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.533 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.533 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:31.533 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:31.533 11:54:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:31.533 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:31.533 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.533 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.533 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:31.794 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:31.794 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:31.794 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:31.794 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.794 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.794 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:31.794 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:31.794 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:31.794 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:31.794 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.794 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.794 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:31.794 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.794 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.794 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:31.794 11:54:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:31.794 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:31.794 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:32.054 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:32.054 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.054 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.054 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:32.055 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:32.055 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:32.055 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.055 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.055 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:32.055 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.055 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.055 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:32.055 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:32.055 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.055 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.055 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:31:32.055 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.055 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.055 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:32.055 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:32.315 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.315 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.315 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:32.315 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:32.315 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:32.315 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.315 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:32.315 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.315 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:32.316 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:32.316 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.316 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.316 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:32.316 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.316 11:54:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.316 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:32.316 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:32.316 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.316 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.316 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:32.316 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.316 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.316 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:32.316 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:32.316 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.316 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.316 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:32.577 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.577 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.577 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:32.577 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:32.577 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.577 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.577 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:32.577 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:32.577 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:32.577 11:54:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.577 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.577 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:32.577 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:32.577 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.577 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.577 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:32.577 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:32.577 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:32.866 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:32.866 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.866 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.866 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:32.866 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.866 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.866 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:32.866 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:32.866 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.866 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.866 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:32.866 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.866 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.866 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:32.866 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.866 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.866 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:32.866 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:32.866 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:32.866 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:32.866 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:32.866 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:32.866 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:32.866 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:33.126 11:54:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.126 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.126 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:33.126 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.126 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.126 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:33.126 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:33.126 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:33.126 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:33.126 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.126 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.126 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:33.126 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.126 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.127 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:33.127 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.127 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.127 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:33.127 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.127 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.127 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:33.127 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:33.127 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.127 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.127 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:33.127 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:33.388 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.388 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.388 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:33.388 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:33.388 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:33.388 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:33.388 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.388 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.388 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:33.388 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:33.388 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:31:33.388 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.388 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.388 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:33.388 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:33.388 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.388 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.388 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:33.649 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:33.649 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.649 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.649 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:33.649 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.649 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.649 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:33.649 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.649 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.650 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:33.650 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.650 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.650 11:54:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:33.650 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:33.650 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.650 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.650 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:33.650 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:33.650 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:33.650 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:33.650 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:33.650 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:33.650 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.650 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.650 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:33.911 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.911 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.911 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:33.911 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:33.911 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.911 
11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.911 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:33.911 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.911 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.911 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:33.911 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.911 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.911 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:33.911 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.911 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.911 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:33.911 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:33.911 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.911 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.911 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:33.911 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:33.911 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:33.911 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:33.911 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:33.911 11:54:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:34.172 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:34.172 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.172 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:34.172 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:34.172 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:34.172 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:34.172 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:34.172 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:34.172 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.172 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:34.172 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:34.172 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.172 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:34.172 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:34.172 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.172 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:34.172 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:34.172 11:54:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.172 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:34.172 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:34.172 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.172 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:34.172 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:34.432 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.432 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:34.432 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:34.432 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.432 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:34.432 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:34.432 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:34.433 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:34.433 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.433 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:34.433 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:34.433 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:34.433 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:34.433 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:31:34.433 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:31:34.433 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:31:34.693 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:31:34.693 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:31:34.693 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:31:34.693 11:54:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:31:34.693 11:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:31:34.693 11:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:31:34.693 11:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:31:34.693 11:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:31:34.693 11:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:34.693 11:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:31:34.693 11:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:34.693 11:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:31:34.693 11:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:34.693 11:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:34.693 rmmod nvme_tcp
00:31:34.693 rmmod nvme_fabrics
00:31:34.693 rmmod nvme_keyring
00:31:34.693 11:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:34.693 11:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:31:34.693 11:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:31:34.693 11:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1275953 ']'
00:31:34.693 11:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1275953
00:31:34.693 11:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 1275953 ']'
00:31:34.694 11:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 1275953
00:31:34.694 11:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname
00:31:34.694 11:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:31:34.694 11:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1275953
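The interleaved (( ++i )) / (( i < 10 )) checks above come from several copies of the same small loop running concurrently, one per namespace, which is why the iterations overlap in the log. A minimal bash sketch of that loop shape, reconstructed from the @16-@18 trace; the helper name churn_ns and the 8-way fan-out are illustrative, not the script's verbatim code:

  # Per-namespace add/remove churn against the same subsystem, as the trace suggests.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  churn_ns() {                       # $1 = NSID, $2 = backing null bdev
    local i
    for ((i = 0; i < 10; i++)); do
      "$rpc" nvmf_subsystem_add_ns -n "$1" "$nqn" "$2"     # line 17 in the trace
      "$rpc" nvmf_subsystem_remove_ns "$nqn" "$1"          # line 18 in the trace
    done
  }
  # NSID n is paired with bdev null(n-1) throughout the trace (e.g. -n 7 ... null6).
  for n in {1..8}; do churn_ns "$n" "null$((n - 1))" & done
  wait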
00:31:34.954 11:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:31:34.954 11:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:31:34.954 11:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1275953'
killing process with pid 1275953
00:31:34.954 11:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 1275953
00:31:34.954 11:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 1275953
00:31:34.954 11:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:31:34.954 11:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:31:34.954 11:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:31:34.954 11:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:31:34.954 11:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:31:34.954 11:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:31:34.954 11:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:31:34.954 11:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:31:34.954 11:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:31:34.954 11:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:31:34.954 11:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:31:34.954 11:55:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:31:37.503
00:31:37.503 real 0m49.217s
00:31:37.503 user 2m59.259s
00:31:37.503 sys 0m20.499s
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:31:37.503 ************************************
00:31:37.503 END TEST nvmf_ns_hotplug_stress
00:31:37.503 ************************************
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
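The @952-@976 records above walk through the killprocess helper: refuse an empty pid, probe it with kill -0, inspect the process name so a bare sudo wrapper is never killed, then kill and reap. A bash sketch of that guarded-kill pattern as the trace suggests it, a reconstruction rather than the verbatim autotest_common.sh source:

  killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1                 # @952: no pid given
    kill -0 "$pid" || return 1                # @956: is the process alive?
    if [ "$(uname)" = Linux ]; then           # @957
      process_name=$(ps --no-headers -o comm= "$pid")   # @958 (here: reactor_1)
    fi
    [ "$process_name" = sudo ] && return 1    # @962: never kill the sudo wrapper
    echo "killing process with pid $pid"      # @970
    kill "$pid"                               # @971
    wait "$pid"                               # @976: reap it and surface its status
  }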
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:31:37.503 ************************************
00:31:37.503 START TEST nvmf_delete_subsystem
00:31:37.503 ************************************
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:31:37.503 * Looking for test storage...
00:31:37.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:31:37.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:37.503 --rc genhtml_branch_coverage=1
00:31:37.503 --rc genhtml_function_coverage=1
00:31:37.503 --rc genhtml_legend=1
00:31:37.503 --rc geninfo_all_blocks=1
00:31:37.503 --rc geninfo_unexecuted_blocks=1
00:31:37.503
00:31:37.503 '
00:31:37.503 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:31:37.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:37.503 --rc genhtml_branch_coverage=1
00:31:37.503 --rc genhtml_function_coverage=1
00:31:37.503 --rc genhtml_legend=1
00:31:37.504 --rc geninfo_all_blocks=1
00:31:37.504 --rc geninfo_unexecuted_blocks=1
00:31:37.504
00:31:37.504 '
00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:31:37.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:37.504 --rc genhtml_branch_coverage=1
00:31:37.504 --rc genhtml_function_coverage=1
00:31:37.504 --rc genhtml_legend=1
00:31:37.504 --rc geninfo_all_blocks=1
00:31:37.504 --rc geninfo_unexecuted_blocks=1
00:31:37.504
00:31:37.504 '
00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:31:37.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:31:37.504 --rc genhtml_branch_coverage=1
00:31:37.504 --rc genhtml_function_coverage=1
00:31:37.504 --rc genhtml_legend=1
00:31:37.504 --rc geninfo_all_blocks=1
00:31:37.504 --rc geninfo_unexecuted_blocks=1
00:31:37.504
00:31:37.504 '
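The scripts/common.sh@333-@368 records above trace a dotted-version comparison: lt 1.15 2 becomes cmp_versions 1.15 '<' 2, which splits each version on '.', '-' and ':' and compares it field by field (here 1 < 2 decides at the first field, hence the return 0). A bash sketch reconstructed from that trace; the decimal fallback to 0 for non-numeric fields is an assumption, not confirmed by the log:

  decimal() {
    local d=$1
    [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0   # @353-@355; fallback assumed
  }
  cmp_versions() {
    local ver1 ver1_l ver2 ver2_l op=$2 v
    IFS=.-: read -ra ver1 <<< "$1"                # @336: "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$3"                # @337: "2"    -> (2)
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}         # @340-@341
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do  # @364
      ver1[v]=$(decimal "${ver1[v]}")             # @365
      ver2[v]=$(decimal "${ver2[v]}")             # @366
      ((ver1[v] > ver2[v])) && { [[ $op == '>' ]]; return; }  # @367
      ((ver1[v] < ver2[v])) && { [[ $op == '<' ]]; return; }  # @368
    done
    [[ $op == '=' ]]   # all fields equal (assumed tail of the function)
  }
  lt() { cmp_versions "$1" '<' "$2"; }            # @373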
genhtml_legend=1 00:31:37.504 --rc geninfo_all_blocks=1 00:31:37.504 --rc geninfo_unexecuted_blocks=1 00:31:37.504 00:31:37.504 ' 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:37.504 11:55:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:31:37.504 11:55:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:45.651 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:45.651 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:31:45.651 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:45.651 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:45.651 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:45.651 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:45.651 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:45.651 11:55:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:31:45.651 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:45.651 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:31:45.651 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:31:45.651 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:31:45.651 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:31:45.651 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:31:45.651 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:31:45.651 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:45.651 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:45.651 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:45.651 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:45.651 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:45.651 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:45.651 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:45.651 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:45.651 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:45.651 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:45.651 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:45.651 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:45.651 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:45.651 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:45.651 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:45.651 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:45.652 11:55:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:45.652 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:45.652 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:45.652 11:55:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:45.652 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:45.652 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:45.652 11:55:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:45.652 11:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:45.652 11:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:45.652 11:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:45.652 11:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:45.652 11:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:45.652 11:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:45.652 11:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:45.652 11:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:45.652 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:45.652 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:31:45.652 00:31:45.652 --- 10.0.0.2 ping statistics --- 00:31:45.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:45.652 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:31:45.652 11:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:45.652 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:45.652 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:31:45.652 00:31:45.652 --- 10.0.0.1 ping statistics --- 00:31:45.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:45.652 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:31:45.652 11:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:45.652 11:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:31:45.652 11:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:45.652 11:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:45.652 11:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:45.652 11:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:45.652 11:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:45.652 11:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:45.652 11:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:45.652 11:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:31:45.652 11:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:45.652 11:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:45.652 11:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:45.652 11:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1287914 00:31:45.652 11:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1287914 00:31:45.652 11:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:31:45.653 11:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 1287914 ']' 00:31:45.653 11:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:45.653 11:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:45.653 11:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:45.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
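The nvmftestinit/nvmf_tcp_init sequence traced above reduces to a short recipe: flush both ports of the NIC, move the target-side port into a private network namespace, give each side an address on 10.0.0.0/24, open TCP port 4420, and verify reachability in both directions. A condensed replay of those commands (interface names cvl_0_0/cvl_0_1 and addresses as seen in this run; error handling omitted):

  # split one two-port NIC into "initiator" (default ns) and "target" (private ns)
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                          # target namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port leaves the default ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address, default ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP
  ping -c 1 10.0.0.2                                    # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator

Splitting the two ports this way forces initiator traffic onto the physical link instead of the kernel loopback, which is what makes a single-host "phy" run meaningful.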
00:31:45.653 11:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:45.653 11:55:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:45.653 [2024-11-15 11:55:10.276459] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:45.653 [2024-11-15 11:55:10.277597] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:31:45.653 [2024-11-15 11:55:10.277647] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:45.653 [2024-11-15 11:55:10.375371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:45.653 [2024-11-15 11:55:10.426673] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:45.653 [2024-11-15 11:55:10.426720] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:45.653 [2024-11-15 11:55:10.426730] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:45.653 [2024-11-15 11:55:10.426737] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:45.653 [2024-11-15 11:55:10.426743] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:45.653 [2024-11-15 11:55:10.428426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:45.653 [2024-11-15 11:55:10.428429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:45.653 [2024-11-15 11:55:10.506289] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:45.653 [2024-11-15 11:55:10.506804] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:45.653 [2024-11-15 11:55:10.507179] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
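nvmfappstart then launches the target inside that namespace and blocks until the RPC socket answers. A minimal approximation of that start-and-wait dance, assuming the default /var/tmp/spdk.sock RPC socket and a polling budget of my own choosing (the real logic lives in nvmfappstart/waitforlisten in the test harness):

  # start nvmf_tgt pinned to cores 0-1 (-m 0x3) in interrupt mode, as traced above
  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
  nvmfpid=$!

  # poll the RPC socket until the app services requests; bail if it died first
  for ((i = 0; i < 200; i++)); do
      if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
          break
      fi
      kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.1
  done

The "Reactor started on core 0/1" and "Set spdk_thread ... to intr mode" notices above confirm both cores of the 0x3 mask came up with interrupt-driven event loops rather than busy polling.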
00:31:45.653 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:45.653 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:31:45.653 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:45.653 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:45.653 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:45.653 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:45.653 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:45.653 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.653 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:45.653 [2024-11-15 11:55:11.145522] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:45.914 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.914 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:45.914 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.914 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:45.914 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.914 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:45.914 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.914 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:45.914 [2024-11-15 11:55:11.178014] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:45.914 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.914 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:31:45.914 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.914 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:45.914 NULL1 00:31:45.914 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.914 11:55:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:45.914 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.914 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:45.914 Delay0 00:31:45.915 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.915 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:45.915 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:45.915 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:45.915 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:45.915 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1288098 00:31:45.915 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:31:45.915 11:55:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:45.915 [2024-11-15 11:55:11.304554] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
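Everything the test needs is then assembled over RPC: a TCP transport, a subsystem capped at 10 namespaces, a listener on 10.0.0.2:4420, and a deliberately slow namespace, after which perf starts hammering it and the subsystem is deleted out from under the I/O. The same fixture as direct rpc.py calls (a sketch; rpc_cmd in the trace is a thin wrapper around this, and the socket path is assumed):

  rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
  $rpc nvmf_create_transport -t tcp -o -u 8192          # flags exactly as traced; -u is the in-capsule data size
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512                  # 1000 MiB null bdev, 512-byte blocks
  $rpc bdev_delay_create -b NULL1 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000       # ~1 s average/p99 read and write latency
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

  # queue up far more I/O than a one-second-latency namespace can retire:
  # queue depth 128, 70/30 random read/write mix, 512-byte I/O, for 5 seconds
  ./build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  sleep 2
  # ...then yank the subsystem while a full queue of commands is still in flight
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

The Delay0 layer is the crux: with a one-second injected latency, the queue-depth-128 workload guarantees in-flight commands for nvmf_delete_subsystem to abort, which is exactly the error storm that follows.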
00:31:47.828 11:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:47.828 11:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:47.828 11:55:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 starting I/O failed: -6 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 Write completed with error (sct=0, sc=8) 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 starting I/O failed: -6 00:31:48.090 Write completed with error (sct=0, sc=8) 00:31:48.090 Write completed with error (sct=0, sc=8) 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 starting I/O failed: -6 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 starting I/O failed: -6 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 Write completed with error (sct=0, sc=8) 00:31:48.090 Write completed with error (sct=0, sc=8) 00:31:48.090 starting I/O failed: -6 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 Write completed with error (sct=0, sc=8) 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 starting I/O failed: -6 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 Write completed with error (sct=0, sc=8) 00:31:48.090 Write completed with error (sct=0, sc=8) 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 starting I/O failed: -6 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 Write completed with error (sct=0, sc=8) 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 starting I/O failed: -6 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 Write completed with error (sct=0, sc=8) 00:31:48.090 Write completed with error (sct=0, sc=8) 00:31:48.090 starting I/O failed: -6 00:31:48.090 Write completed with error (sct=0, sc=8) 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 starting I/O failed: -6 00:31:48.090 Write completed with error (sct=0, sc=8) 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 starting I/O failed: -6 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 [2024-11-15 11:55:13.442496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11002c0 is same with the state(6) to be set 00:31:48.090 Write completed with error (sct=0, sc=8) 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 Read 
completed with error (sct=0, sc=8) 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 Write completed with error (sct=0, sc=8) 00:31:48.090 Write completed with error (sct=0, sc=8) 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 Write completed with error (sct=0, sc=8) 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 Write completed with error (sct=0, sc=8) 00:31:48.090 Write completed with error (sct=0, sc=8) 00:31:48.090 Write completed with error (sct=0, sc=8) 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.090 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Write completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Write completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Write completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Write completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Write completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 starting I/O failed: -6 00:31:48.091 Write completed with error (sct=0, sc=8) 00:31:48.091 Write completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 starting I/O failed: -6 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Write completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Write completed with error (sct=0, sc=8) 00:31:48.091 starting I/O failed: -6 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 starting I/O failed: -6 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read 
completed with error (sct=0, sc=8) 00:31:48.091 Write completed with error (sct=0, sc=8) 00:31:48.091 starting I/O failed: -6 00:31:48.091 Write completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 starting I/O failed: -6 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Write completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 starting I/O failed: -6 00:31:48.091 Write completed with error (sct=0, sc=8) 00:31:48.091 Write completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 starting I/O failed: -6 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Write completed with error (sct=0, sc=8) 00:31:48.091 starting I/O failed: -6 00:31:48.091 Write completed with error (sct=0, sc=8) 00:31:48.091 Write completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 starting I/O failed: -6 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Write completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 starting I/O failed: -6 00:31:48.091 starting I/O failed: -6 00:31:48.091 starting I/O failed: -6 00:31:48.091 Write completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 starting I/O failed: -6 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 starting I/O failed: -6 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 starting I/O failed: -6 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Write completed with error (sct=0, sc=8) 00:31:48.091 starting I/O failed: -6 00:31:48.091 Write completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 starting I/O failed: -6 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 starting I/O failed: -6 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 starting I/O failed: -6 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 starting I/O failed: -6 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Write completed with error (sct=0, sc=8) 00:31:48.091 starting I/O failed: -6 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Write completed with error (sct=0, sc=8) 00:31:48.091 starting I/O failed: -6 00:31:48.091 Write completed with error (sct=0, sc=8) 00:31:48.091 Write completed with error (sct=0, sc=8) 00:31:48.091 starting I/O failed: -6 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Write completed with error (sct=0, sc=8) 00:31:48.091 starting I/O failed: -6 00:31:48.091 Write completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 starting I/O failed: -6 00:31:48.091 Read completed with 
error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 starting I/O failed: -6 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 starting I/O failed: -6 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 starting I/O failed: -6 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 starting I/O failed: -6 00:31:48.091 Write completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 starting I/O failed: -6 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 starting I/O failed: -6 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 starting I/O failed: -6 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 starting I/O failed: -6 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 starting I/O failed: -6 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Write completed with error (sct=0, sc=8) 00:31:48.091 starting I/O failed: -6 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Write completed with error (sct=0, sc=8) 00:31:48.091 starting I/O failed: -6 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 starting I/O failed: -6 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 Read completed with error (sct=0, sc=8) 00:31:48.091 starting I/O failed: -6 00:31:48.091 Write completed with error (sct=0, sc=8) 00:31:48.091 Write completed with error (sct=0, sc=8) 00:31:48.091 starting I/O failed: -6 00:31:48.091 [2024-11-15 11:55:13.446845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb980000c40 is same with the state(6) to be set 00:31:49.034 [2024-11-15 11:55:14.405428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11019a0 is same with the state(6) to be set 00:31:49.034 Write completed with error (sct=0, sc=8) 00:31:49.034 Write completed with error (sct=0, sc=8) 00:31:49.034 Read completed with error (sct=0, sc=8) 00:31:49.034 Write completed with error (sct=0, sc=8) 00:31:49.034 Write completed with error (sct=0, sc=8) 00:31:49.034 Write completed with error (sct=0, sc=8) 00:31:49.034 Read completed with error (sct=0, sc=8) 00:31:49.034 Read completed with error (sct=0, sc=8) 00:31:49.034 Read completed with error (sct=0, sc=8) 00:31:49.034 Read completed with error (sct=0, sc=8) 00:31:49.034 Read completed with error (sct=0, sc=8) 00:31:49.034 Write completed with error (sct=0, sc=8) 00:31:49.034 Read completed with error (sct=0, sc=8) 00:31:49.034 Read completed with error (sct=0, sc=8) 00:31:49.034 Write completed with error (sct=0, sc=8) 00:31:49.034 Read completed with error (sct=0, sc=8) 00:31:49.034 Write completed with error (sct=0, sc=8) 00:31:49.034 Write completed with error (sct=0, sc=8) 00:31:49.034 Read completed with error (sct=0, sc=8) 00:31:49.034 Read completed with error (sct=0, sc=8) 00:31:49.034 Write completed with error (sct=0, sc=8) 00:31:49.034 Write completed with error (sct=0, sc=8) 00:31:49.034 [2024-11-15 11:55:14.446043] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11004a0 is same with the state(6) to be set 00:31:49.034 Read completed with error (sct=0, sc=8) 00:31:49.034 Read completed with error (sct=0, sc=8) 00:31:49.034 Write completed with error (sct=0, sc=8) 00:31:49.034 Read completed with error (sct=0, sc=8) 00:31:49.034 Read completed with error (sct=0, sc=8) 00:31:49.034 Write completed with error (sct=0, sc=8) 00:31:49.034 Read completed with error (sct=0, sc=8) 00:31:49.034 Read completed with error (sct=0, sc=8) 00:31:49.034 Read completed with error (sct=0, sc=8) 00:31:49.034 Write completed with error (sct=0, sc=8) 00:31:49.034 Read completed with error (sct=0, sc=8) 00:31:49.034 Read completed with error (sct=0, sc=8) 00:31:49.034 Write completed with error (sct=0, sc=8) 00:31:49.034 Write completed with error (sct=0, sc=8) 00:31:49.034 Read completed with error (sct=0, sc=8) 00:31:49.034 Write completed with error (sct=0, sc=8) 00:31:49.034 Read completed with error (sct=0, sc=8) 00:31:49.034 Read completed with error (sct=0, sc=8) 00:31:49.034 Write completed with error (sct=0, sc=8) 00:31:49.034 Read completed with error (sct=0, sc=8) 00:31:49.034 Read completed with error (sct=0, sc=8) 00:31:49.034 [2024-11-15 11:55:14.446736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1100860 is same with the state(6) to be set 00:31:49.034 Read completed with error (sct=0, sc=8) 00:31:49.034 Read completed with error (sct=0, sc=8) 00:31:49.035 Write completed with error (sct=0, sc=8) 00:31:49.035 Write completed with error (sct=0, sc=8) 00:31:49.035 Write completed with error (sct=0, sc=8) 00:31:49.035 Write completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Write completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Write completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Write completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Write completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 [2024-11-15 11:55:14.447876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb98000d020 is same with the state(6) to be set 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Write completed with error (sct=0, sc=8) 
00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Write completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Write completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Write completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Write completed with error (sct=0, sc=8) 00:31:49.035 Write completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Write completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Write completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Read completed with error (sct=0, sc=8) 00:31:49.035 Write completed with error (sct=0, sc=8) 00:31:49.035 [2024-11-15 11:55:14.447996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb98000d7c0 is same with the state(6) to be set 00:31:49.035 Initializing NVMe Controllers 00:31:49.035 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:49.035 Controller IO queue size 128, less than required. 00:31:49.035 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:49.035 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:49.035 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:49.035 Initialization complete. Launching workers. 
00:31:49.035 ========================================================
00:31:49.035                                             Latency(us)
00:31:49.035 Device Information                                     :       IOPS      MiB/s    Average        min        max
00:31:49.035 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     164.65       0.08  907490.50     402.93 1008367.23
00:31:49.035 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     176.59       0.09  920191.41     457.95 1011729.53
00:31:49.035 ========================================================
00:31:49.035 Total                                                  :     341.23       0.17  914063.13     402.93 1011729.53
00:31:49.035 
00:31:49.035 [2024-11-15 11:55:14.448404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11019a0 (9): Bad file descriptor
00:31:49.035 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:31:49.035 11:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:49.035 11:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:31:49.035 11:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1288098
00:31:49.035 11:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:31:49.606 11:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:31:49.606 11:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1288098
00:31:49.606 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1288098) - No such process
00:31:49.607 11:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1288098
00:31:49.607 11:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0
00:31:49.607 11:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1288098
00:31:49.607 11:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait
00:31:49.607 11:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:31:49.607 11:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait
00:31:49.607 11:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:31:49.607 11:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1288098
00:31:49.607 11:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1
00:31:49.607 11:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:31:49.607 11:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:31:49.607 11:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:31:49.607 11:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:49.607 11:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.607 11:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:49.607 11:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.607 11:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:49.607 11:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.607 11:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:49.607 [2024-11-15 11:55:14.981873] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:49.607 11:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.607 11:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:49.607 11:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:49.607 11:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:49.607 11:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:49.607 11:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1288828 00:31:49.607 11:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:31:49.607 11:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:49.607 11:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1288828 00:31:49.607 11:55:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:49.607 [2024-11-15 11:55:15.079445] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
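[note: the xtrace above is the heart of the test: recreate the subsystem, re-add the listener and the Delay0 namespace, relaunch spdk_nvme_perf in the background, and then poll the perf process until the next delete kills its I/O. Reconstructed from the traced commands, the pattern looks roughly like this; only the commands themselves appear in the trace, the surrounding scaffolding and variable handling are assumptions:

  # relaunch the workload against the freshly recreated subsystem (runs in background)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!          # the trace records this as perf_pid=1288828

  # poll until the perf process disappears; kill -0 only tests existence,
  # which is why the log shows a raw "No such process" once it exits
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 20 )) && break   # give up after ~10s of 0.5s naps
    sleep 0.5
  done
]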
00:31:50.176 11:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:50.176 11:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1288828
00:31:50.176 11:55:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:31:50.747 11:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:50.747 11:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1288828
00:31:50.747 11:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:31:51.318 11:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:51.318 11:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1288828
00:31:51.318 11:55:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:31:51.581 11:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:51.581 11:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1288828
00:31:51.581 11:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:31:52.151 11:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:52.151 11:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1288828
00:31:52.151 11:55:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:31:52.721 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:52.721 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1288828
00:31:52.721 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:31:52.981 Initializing NVMe Controllers
00:31:52.981 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:52.981 Controller IO queue size 128, less than required.
00:31:52.981 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:52.981 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:31:52.981 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:31:52.981 Initialization complete. Launching workers.
00:31:52.981 ========================================================
00:31:52.981                                                                            Latency(us)
00:31:52.981 Device Information                                                     :      IOPS     MiB/s    Average        min        max
00:31:52.981 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:    128.00      0.06 1002410.98 1000364.20 1005903.73
00:31:52.981 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:    128.00      0.06 1004429.23 1000437.21 1041621.29
00:31:52.981 ========================================================
00:31:52.981 Total                                                                  :    256.00      0.12 1003420.10 1000364.20 1041621.29
00:31:52.981
00:31:53.242 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:53.243 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1288828
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1288828) - No such process
00:31:53.243 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1288828
00:31:53.243 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:31:53.243 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:31:53.243 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:31:53.243 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:31:53.243 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:31:53.243 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:31:53.243 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:31:53.243 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:31:53.243 rmmod nvme_tcp
00:31:53.243 rmmod nvme_fabrics
00:31:53.243 rmmod nvme_keyring
00:31:53.243 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:31:53.243 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:31:53.243 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:31:53.243 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1287914 ']'
00:31:53.243 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1287914
00:31:53.243 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 1287914 ']'
00:31:53.243 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 1287914
00:31:53.243 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname
00:31:53.243 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:31:53.243 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1287914 00:31:53.243 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:53.243 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:53.243 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1287914' 00:31:53.243 killing process with pid 1287914 00:31:53.243 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 1287914 00:31:53.243 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 1287914 00:31:53.503 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:53.503 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:53.503 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:53.503 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:31:53.503 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:31:53.503 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:53.503 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:31:53.503 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:53.503 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:53.503 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:53.503 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:53.503 11:55:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:55.413 11:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:55.413 00:31:55.413 real 0m18.369s 00:31:55.413 user 0m26.671s 00:31:55.413 sys 0m7.445s 00:31:55.413 11:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:55.413 11:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:55.413 ************************************ 00:31:55.413 END TEST nvmf_delete_subsystem 00:31:55.413 ************************************ 00:31:55.413 11:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:55.413 11:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:55.413 11:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:31:55.413 11:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:55.673 ************************************ 00:31:55.673 START TEST nvmf_host_management 00:31:55.673 ************************************ 00:31:55.673 11:55:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:55.673 * Looking for test storage... 00:31:55.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:55.673 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:55.673 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:31:55.673 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:55.673 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:55.673 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:55.673 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:55.673 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:55.673 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:31:55.673 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:31:55.673 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:31:55.673 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:31:55.673 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:31:55.673 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:31:55.673 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:31:55.673 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:55.673 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:31:55.673 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:31:55.673 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:55.673 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:55.673 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:31:55.673 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:31:55.673 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:55.673 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:31:55.673 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:31:55.673 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:31:55.673 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:31:55.673 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:55.673 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:31:55.673 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:31:55.673 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:55.673 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:55.673 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:31:55.673 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:55.673 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:55.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.673 --rc genhtml_branch_coverage=1 00:31:55.673 --rc genhtml_function_coverage=1 00:31:55.673 --rc genhtml_legend=1 00:31:55.674 --rc geninfo_all_blocks=1 00:31:55.674 --rc geninfo_unexecuted_blocks=1 00:31:55.674 00:31:55.674 ' 00:31:55.674 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:55.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.674 --rc genhtml_branch_coverage=1 00:31:55.674 --rc genhtml_function_coverage=1 00:31:55.674 --rc genhtml_legend=1 00:31:55.674 --rc geninfo_all_blocks=1 00:31:55.674 --rc geninfo_unexecuted_blocks=1 00:31:55.674 00:31:55.674 ' 00:31:55.674 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:55.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.674 --rc genhtml_branch_coverage=1 00:31:55.674 --rc genhtml_function_coverage=1 00:31:55.674 --rc genhtml_legend=1 00:31:55.674 --rc geninfo_all_blocks=1 00:31:55.674 --rc geninfo_unexecuted_blocks=1 00:31:55.674 00:31:55.674 ' 00:31:55.674 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:55.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:55.674 --rc genhtml_branch_coverage=1 00:31:55.674 --rc genhtml_function_coverage=1 00:31:55.674 --rc genhtml_legend=1 
00:31:55.674 --rc geninfo_all_blocks=1 00:31:55.674 --rc geninfo_unexecuted_blocks=1 00:31:55.674 00:31:55.674 ' 00:31:55.674 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:55.674 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:31:55.674 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:55.674 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:55.674 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:55.674 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:55.674 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:55.674 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:55.674 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:55.674 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:55.674 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:55.674 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:55.674 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:55.674 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:55.674 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:55.935 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:55.935 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:55.935 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:55.935 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:55.935 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:31:55.935 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:55.935 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:55.935 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:55.935 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.935 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.935 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.935 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:31:55.935 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.935 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:31:55.935 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:55.935 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:55.935 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:55.935 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:55.935 11:55:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:55.935 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:55.935 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:55.935 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:55.935 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:55.935 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:55.935 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:55.935 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:55.935 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:31:55.935 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:55.935 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:55.935 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:55.935 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:55.935 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:55.935 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:55.935 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:55.935 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:55.935 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:55.935 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:55.935 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:31:55.935 11:55:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:04.074 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:04.075 11:55:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:04.075 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:04.075 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:04.075 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:04.075 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:32:04.075 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:32:04.076 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:32:04.076 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:32:04.076 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:32:04.076 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:32:04.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:32:04.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.514 ms
00:32:04.076
00:32:04.076 --- 10.0.0.2 ping statistics ---
00:32:04.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:04.076 rtt min/avg/max/mdev = 0.514/0.514/0.514/0.000 ms
00:32:04.076 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:32:04.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:32:04.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms
00:32:04.076
00:32:04.076 --- 10.0.0.1 ping statistics ---
00:32:04.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:32:04.076 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms
00:32:04.076 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:32:04.076 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0
00:32:04.076 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:32:04.076 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:32:04.076 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:32:04.076 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:32:04.076 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:32:04.076 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:32:04.076 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:32:04.076 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management
00:32:04.076 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget
00:32:04.076 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E
00:32:04.076 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:32:04.076 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable
00:32:04.076 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:32:04.076 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1293611
00:32:04.076 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1293611
00:32:04.076 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E
00:32:04.076 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 1293611 ']'
00:32:04.076 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:04.076 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100
00:32:04.076 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
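[note: stepping back, what nvmf_tcp_init did above is wire up the physical E810 pair: the target-side port (cvl_0_0) is moved into a private network namespace as 10.0.0.2/24 while its sibling (cvl_0_1) stays in the root namespace as 10.0.0.1/24, so initiator and target talk over the real link, and an iptables ACCEPT rule opens port 4420. Condensed into a plain script, every command below appears verbatim in the trace; the comment-tagged iptables rule is what the later cleanup (iptables-save | grep -v SPDK_NVMF | iptables-restore) uses to find its own rules:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                 # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator
]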
00:32:04.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:04.076 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:04.076 11:55:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:04.076 [2024-11-15 11:55:28.773606] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:04.076 [2024-11-15 11:55:28.774758] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:32:04.076 [2024-11-15 11:55:28.774810] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:04.076 [2024-11-15 11:55:28.864367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:04.076 [2024-11-15 11:55:28.917643] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:04.076 [2024-11-15 11:55:28.917694] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:04.076 [2024-11-15 11:55:28.917702] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:04.076 [2024-11-15 11:55:28.917709] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:04.076 [2024-11-15 11:55:28.917716] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:04.076 [2024-11-15 11:55:28.920110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:04.076 [2024-11-15 11:55:28.920276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:04.076 [2024-11-15 11:55:28.920437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:04.076 [2024-11-15 11:55:28.920437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:04.076 [2024-11-15 11:55:28.999606] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:04.076 [2024-11-15 11:55:29.000675] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:04.076 [2024-11-15 11:55:29.000959] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:04.076 [2024-11-15 11:55:29.001473] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:04.076 [2024-11-15 11:55:29.001524] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
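[note: waitforlisten above blocks until the freshly started nvmf_tgt (pid 1293611, core mask 0x1E, interrupt mode, run inside the namespace) is accepting RPCs on /var/tmp/spdk.sock; the startup notices show DPDK init, the four reactors coming up, and every poll group switching to interrupt mode. A minimal stand-in for that wait, assuming the helper simply polls for the UNIX socket (the real waitforlisten in common/autotest_common.sh is more involved; the 0.1s interval is an assumption, while the command line and max_retries=100 come from the trace):

  # start the target in the namespace, then wait for its RPC socket
  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
  nvmfpid=$!

  for _ in $(seq 1 100); do
    [ -S /var/tmp/spdk.sock ] && break   # -S: the socket file exists
    kill -0 "$nvmfpid" 2>/dev/null || exit 1   # bail if the target died
    sleep 0.1
  done
]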
00:32:04.338 11:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:04.338 11:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:32:04.338 11:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:04.338 11:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:04.338 11:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:04.338 11:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:04.338 11:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:04.338 11:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.338 11:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:04.338 [2024-11-15 11:55:29.637416] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:04.338 11:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.338 11:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:32:04.338 11:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:04.338 11:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:04.338 11:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:04.338 11:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:32:04.338 11:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:32:04.338 11:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:04.338 11:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:04.338 Malloc0 00:32:04.338 [2024-11-15 11:55:29.749767] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:04.338 11:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:04.338 11:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:32:04.338 11:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:04.338 11:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:04.338 11:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1293978 00:32:04.338 11:55:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1293978 /var/tmp/bdevperf.sock
00:32:04.338 11:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 1293978 ']'
00:32:04.338 11:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:32:04.338 11:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100
00:32:04.338 11:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:32:04.338 11:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:32:04.338 11:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0
00:32:04.338 11:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable
00:32:04.338 11:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:32:04.338 11:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:32:04.338 11:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:32:04.338 11:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:32:04.338 11:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:32:04.338 {
00:32:04.338   "params": {
00:32:04.338     "name": "Nvme$subsystem",
00:32:04.338     "trtype": "$TEST_TRANSPORT",
00:32:04.338     "traddr": "$NVMF_FIRST_TARGET_IP",
00:32:04.338     "adrfam": "ipv4",
00:32:04.338     "trsvcid": "$NVMF_PORT",
00:32:04.338     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:32:04.338     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:32:04.338     "hdgst": ${hdgst:-false},
00:32:04.338     "ddgst": ${ddgst:-false}
00:32:04.338   },
00:32:04.338   "method": "bdev_nvme_attach_controller"
00:32:04.338 }
00:32:04.338 EOF
00:32:04.338 )")
00:32:04.338 11:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:32:04.338 11:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
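[note: gen_nvmf_target_json above builds bdevperf's attach configuration on the fly: for each subsystem number it expands a here-doc into a JSON stanza (Nvme$subsystem, cnode$subsystem and host$subsystem are stamped out per id), validates it with jq, and the caller feeds the result to bdevperf as --json /dev/fd/63 via process substitution. The composed document is printed just below. A stripped-down version of the same trick, rebuilt here with jq -n instead of the script's here-doc purely to sidestep quoting; this is a sketch, not the helper itself:

  # stamp out one attach stanza per subsystem id (jq does the quoting)
  gen_json() {
    local s
    for s in "${@:-0}"; do
      jq -n --arg s "$s" '{
        method: "bdev_nvme_attach_controller",
        params: {
          name: ("Nvme" + $s),
          trtype: "tcp", traddr: "10.0.0.2",
          adrfam: "ipv4", trsvcid: "4420",
          subnqn: ("nqn.2016-06.io.spdk:cnode" + $s),
          hostnqn: ("nqn.2016-06.io.spdk:host" + $s)
        }
      }'
    done
  }
  # a consumer would read it via process substitution, e.g.:
  #   bdevperf --json <(gen_json 0) ...
]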
00:32:04.338 11:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:32:04.338 11:55:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:04.338 "params": { 00:32:04.338 "name": "Nvme0", 00:32:04.338 "trtype": "tcp", 00:32:04.338 "traddr": "10.0.0.2", 00:32:04.338 "adrfam": "ipv4", 00:32:04.338 "trsvcid": "4420", 00:32:04.338 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:04.338 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:04.338 "hdgst": false, 00:32:04.338 "ddgst": false 00:32:04.338 }, 00:32:04.338 "method": "bdev_nvme_attach_controller" 00:32:04.338 }' 00:32:04.599 [2024-11-15 11:55:29.858921] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:32:04.599 [2024-11-15 11:55:29.858996] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1293978 ] 00:32:04.599 [2024-11-15 11:55:29.953557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:04.599 [2024-11-15 11:55:30.007551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:04.859 Running I/O for 10 seconds... 00:32:05.433 11:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:05.433 11:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:32:05.433 11:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:32:05.433 11:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.433 11:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:05.433 11:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.433 11:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:05.433 11:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:32:05.433 11:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:32:05.433 11:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:32:05.433 11:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:32:05.433 11:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:32:05.433 11:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:32:05.433 11:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:32:05.433 11:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:32:05.433 11:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:32:05.433 11:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.433 11:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:05.433 11:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.433 11:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:32:05.433 11:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:32:05.433 11:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:32:05.433 11:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:32:05.433 11:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:32:05.433 11:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:32:05.433 11:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.433 11:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:05.433 [2024-11-15 11:55:30.777364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.433 [2024-11-15 11:55:30.777425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.433 [2024-11-15 11:55:30.777446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.433 [2024-11-15 11:55:30.777455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.433 [2024-11-15 11:55:30.777466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.433 [2024-11-15 11:55:30.777474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.433 [2024-11-15 11:55:30.777484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.433 [2024-11-15 11:55:30.777492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.433 [2024-11-15 11:55:30.777502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.433 [2024-11-15 11:55:30.777509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.433 [2024-11-15 
11:55:30.777520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.433 [2024-11-15 11:55:30.777528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.433 [2024-11-15 11:55:30.777538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.433 [2024-11-15 11:55:30.777546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.433 [2024-11-15 11:55:30.777555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.433 [2024-11-15 11:55:30.777572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.433 [2024-11-15 11:55:30.777582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.433 [2024-11-15 11:55:30.777590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.433 [2024-11-15 11:55:30.777599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.433 [2024-11-15 11:55:30.777607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.433 [2024-11-15 11:55:30.777616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.433 [2024-11-15 11:55:30.777624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.433 [2024-11-15 11:55:30.777634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.433 [2024-11-15 11:55:30.777642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.433 [2024-11-15 11:55:30.777660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.433 [2024-11-15 11:55:30.777668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.433 [2024-11-15 11:55:30.777677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.433 [2024-11-15 11:55:30.777685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.433 [2024-11-15 11:55:30.777694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.433 [2024-11-15 11:55:30.777702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.433 [2024-11-15 
11:55:30.777712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.433 [2024-11-15 11:55:30.777720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.433 [2024-11-15 11:55:30.777730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.433 [2024-11-15 11:55:30.777737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.433 [2024-11-15 11:55:30.777747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.433 [2024-11-15 11:55:30.777754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.433 [2024-11-15 11:55:30.777764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.433 [2024-11-15 11:55:30.777772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.433 [2024-11-15 11:55:30.777781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.433 [2024-11-15 11:55:30.777788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.433 [2024-11-15 11:55:30.777798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.433 [2024-11-15 11:55:30.777806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.433 [2024-11-15 11:55:30.777816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.433 [2024-11-15 11:55:30.777824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.433 [2024-11-15 11:55:30.777835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.433 [2024-11-15 11:55:30.777843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.433 [2024-11-15 11:55:30.777853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.433 [2024-11-15 11:55:30.777861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.433 [2024-11-15 11:55:30.777871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.434 [2024-11-15 11:55:30.777881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.434 [2024-11-15 11:55:30.777892] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.434 [2024-11-15 11:55:30.777900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.434 [2024-11-15 11:55:30.777910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.434 [2024-11-15 11:55:30.777918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.434 [2024-11-15 11:55:30.777929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.434 [2024-11-15 11:55:30.777936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.434 [2024-11-15 11:55:30.777946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.434 [2024-11-15 11:55:30.777954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.434 [2024-11-15 11:55:30.777963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.434 [2024-11-15 11:55:30.777970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.434 [2024-11-15 11:55:30.777980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.434 [2024-11-15 11:55:30.777987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.434 [2024-11-15 11:55:30.777996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.434 [2024-11-15 11:55:30.778004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.434 [2024-11-15 11:55:30.778013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.434 [2024-11-15 11:55:30.778020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.434 [2024-11-15 11:55:30.778031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.434 [2024-11-15 11:55:30.778038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.434 [2024-11-15 11:55:30.778048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.434 [2024-11-15 11:55:30.778055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.434 [2024-11-15 11:55:30.778065] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.434 [2024-11-15 11:55:30.778072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.434 [2024-11-15 11:55:30.778082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.434 [2024-11-15 11:55:30.778091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.434 [2024-11-15 11:55:30.778102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.434 [2024-11-15 11:55:30.778110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.434 [2024-11-15 11:55:30.778119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.434 [2024-11-15 11:55:30.778127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.434 [2024-11-15 11:55:30.778136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.434 [2024-11-15 11:55:30.778144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.434 [2024-11-15 11:55:30.778154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.434 [2024-11-15 11:55:30.778161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.434 [2024-11-15 11:55:30.778171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.434 [2024-11-15 11:55:30.778179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.434 [2024-11-15 11:55:30.778189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.434 [2024-11-15 11:55:30.778196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.434 [2024-11-15 11:55:30.778205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.434 [2024-11-15 11:55:30.778213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.434 [2024-11-15 11:55:30.778224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.434 [2024-11-15 11:55:30.778231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.434 [2024-11-15 11:55:30.778241] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.434 [2024-11-15 11:55:30.778249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.434 [2024-11-15 11:55:30.778259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.434 [2024-11-15 11:55:30.778266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.434 [2024-11-15 11:55:30.778275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.434 [2024-11-15 11:55:30.778283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.434 [2024-11-15 11:55:30.778293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.434 [2024-11-15 11:55:30.778300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.434 [2024-11-15 11:55:30.778310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.434 [2024-11-15 11:55:30.778319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.434 [2024-11-15 11:55:30.778329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.434 [2024-11-15 11:55:30.778336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.434 [2024-11-15 11:55:30.778345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.434 [2024-11-15 11:55:30.778353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.434 [2024-11-15 11:55:30.778363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.434 [2024-11-15 11:55:30.778370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.434 [2024-11-15 11:55:30.778379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.434 [2024-11-15 11:55:30.778386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.434 [2024-11-15 11:55:30.778396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.434 [2024-11-15 11:55:30.778403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.434 [2024-11-15 11:55:30.778413] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.434 [2024-11-15 11:55:30.778421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.434 [2024-11-15 11:55:30.778430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.434 [2024-11-15 11:55:30.778437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.434 [2024-11-15 11:55:30.778446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.434 [2024-11-15 11:55:30.778454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.434 [2024-11-15 11:55:30.778465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.434 [2024-11-15 11:55:30.778472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.434 [2024-11-15 11:55:30.778482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.434 [2024-11-15 11:55:30.778489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.434 [2024-11-15 11:55:30.778498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.434 [2024-11-15 11:55:30.778506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.434 [2024-11-15 11:55:30.778515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.434 [2024-11-15 11:55:30.778523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.434 [2024-11-15 11:55:30.778534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.434 [2024-11-15 11:55:30.778542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.434 [2024-11-15 11:55:30.778551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:05.435 [2024-11-15 11:55:30.778558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.435 [2024-11-15 11:55:30.778572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9a5ee0 is same with the state(6) to be set 00:32:05.435 [2024-11-15 11:55:30.779868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:32:05.435 task offset: 104064 on job bdev=Nvme0n1 fails 00:32:05.435 00:32:05.435 Latency(us) 00:32:05.435 
[2024-11-15T10:55:30.933Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:32:05.435 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:05.435 Job: Nvme0n1 ended in about 0.60 seconds with error
00:32:05.435 Verification LBA range: start 0x0 length 0x400
00:32:05.435 Nvme0n1                     :       0.60    1280.28      80.02     106.69       0.00   45084.12    1720.32   36481.71
00:32:05.435 [2024-11-15T10:55:30.933Z] ===================================================================================================================
00:32:05.435 [2024-11-15T10:55:30.933Z] Total                       :               1280.28      80.02     106.69       0.00   45084.12    1720.32   36481.71
00:32:05.435 [2024-11-15 11:55:30.782110] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:32:05.435 [2024-11-15 11:55:30.782149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x78d000 (9): Bad file descriptor 00:32:05.435 11:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.435 11:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:32:05.435 11:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:05.435 11:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:05.435 [2024-11-15 11:55:30.783885] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:32:05.435 [2024-11-15 11:55:30.783994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:32:05.435 [2024-11-15 11:55:30.784038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:05.435 [2024-11-15 11:55:30.784058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:32:05.435 [2024-11-15 11:55:30.784068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:32:05.435 [2024-11-15 11:55:30.784076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:32:05.435 [2024-11-15 11:55:30.784085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x78d000 00:32:05.435 [2024-11-15 11:55:30.784110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x78d000 (9): Bad file descriptor 00:32:05.435 [2024-11-15 11:55:30.784126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:32:05.435 [2024-11-15 11:55:30.784133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:32:05.435 [2024-11-15 11:55:30.784156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:32:05.435 [2024-11-15 11:55:30.784167] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:32:05.435 11:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:05.435 11:55:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:32:06.378 11:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1293978 00:32:06.378 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1293978) - No such process 00:32:06.378 11:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:32:06.378 11:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:32:06.378 11:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:32:06.378 11:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:32:06.378 11:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:32:06.378 11:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:32:06.378 11:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:06.378 11:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:06.378 { 00:32:06.378 "params": { 00:32:06.378 "name": "Nvme$subsystem", 00:32:06.378 "trtype": "$TEST_TRANSPORT", 00:32:06.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:06.378 "adrfam": "ipv4", 00:32:06.378 "trsvcid": "$NVMF_PORT", 00:32:06.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:06.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:06.378 "hdgst": ${hdgst:-false}, 00:32:06.378 "ddgst": ${ddgst:-false} 00:32:06.378 }, 00:32:06.378 "method": "bdev_nvme_attach_controller" 00:32:06.378 } 00:32:06.378 EOF 00:32:06.378 )") 00:32:06.378 11:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:32:06.378 11:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:32:06.378 11:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:32:06.378 11:55:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:06.378 "params": { 00:32:06.378 "name": "Nvme0", 00:32:06.378 "trtype": "tcp", 00:32:06.378 "traddr": "10.0.0.2", 00:32:06.378 "adrfam": "ipv4", 00:32:06.378 "trsvcid": "4420", 00:32:06.378 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:06.378 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:06.378 "hdgst": false, 00:32:06.378 "ddgst": false 00:32:06.378 }, 00:32:06.378 "method": "bdev_nvme_attach_controller" 00:32:06.378 }' 00:32:06.378 [2024-11-15 11:55:31.859974] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:32:06.378 [2024-11-15 11:55:31.860053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1294331 ] 00:32:06.638 [2024-11-15 11:55:31.953824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.638 [2024-11-15 11:55:32.004437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:06.899 Running I/O for 1 seconds... 00:32:07.840 1773.00 IOPS, 110.81 MiB/s 00:32:07.840 00:32:07.840 Latency(us)
[2024-11-15T10:55:33.338Z] Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:32:07.840 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:32:07.840 Verification LBA range: start 0x0 length 0x400
00:32:07.840 Nvme0n1                     :       1.01    1823.39     113.96       0.00       0.00   34374.39    1126.40   37792.43
[2024-11-15T10:55:33.338Z] ===================================================================================================================
[2024-11-15T10:55:33.338Z] Total                       :               1823.39     113.96       0.00       0.00   34374.39    1126.40   37792.43
00:32:08.101 11:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:32:08.101 11:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:32:08.101 11:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:32:08.101 11:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:08.101 11:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:32:08.101 11:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:08.101 11:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:32:08.101 11:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:08.101 11:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:32:08.101 11:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:08.101 11:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:08.101 rmmod nvme_tcp rmmod nvme_fabrics rmmod nvme_keyring 00:32:08.101 11:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:08.101 11:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:32:08.101 11:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:32:08.101 11:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1293611 ']' 00:32:08.101 11:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1293611 00:32:08.101 11:55:33
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 1293611 ']' 00:32:08.101 11:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 1293611 00:32:08.101 11:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:32:08.101 11:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:08.101 11:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1293611 00:32:08.101 11:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:08.101 11:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:08.101 11:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1293611' 00:32:08.101 killing process with pid 1293611 00:32:08.101 11:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 1293611 00:32:08.101 11:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 1293611 00:32:08.361 [2024-11-15 11:55:33.669983] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:32:08.361 11:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:08.361 11:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:08.361 11:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:08.361 11:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:32:08.361 11:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:32:08.361 11:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:08.362 11:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:32:08.362 11:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:08.362 11:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:08.362 11:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:08.362 11:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:08.362 11:55:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:10.909 11:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:10.909 11:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:32:10.909 00:32:10.909 real 0m14.837s 00:32:10.909 user 
0m19.856s 00:32:10.909 sys 0m7.513s 00:32:10.909 11:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:10.909 11:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:10.909 ************************************ 00:32:10.909 END TEST nvmf_host_management 00:32:10.909 ************************************ 00:32:10.909 11:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:32:10.909 11:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:10.909 11:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:10.909 11:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:10.909 ************************************ 00:32:10.909 START TEST nvmf_lvol 00:32:10.909 ************************************ 00:32:10.909 11:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:32:10.909 * Looking for test storage... 00:32:10.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:10.909 11:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:10.909 11:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:32:10.909 11:55:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:10.909 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:10.909 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:10.909 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:10.909 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:10.909 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:32:10.909 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:32:10.909 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:32:10.909 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:32:10.909 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:32:10.909 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:32:10.909 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:32:10.909 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:10.909 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:32:10.909 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 
00:32:10.909 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:10.909 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:10.909 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:32:10.909 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:32:10.909 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:10.909 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:32:10.909 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:32:10.909 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:32:10.909 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:32:10.909 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:10.909 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:32:10.909 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:32:10.909 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:10.909 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:10.909 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:32:10.909 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:10.909 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:10.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.909 --rc genhtml_branch_coverage=1 00:32:10.909 --rc genhtml_function_coverage=1 00:32:10.909 --rc genhtml_legend=1 00:32:10.909 --rc geninfo_all_blocks=1 00:32:10.909 --rc geninfo_unexecuted_blocks=1 00:32:10.909 00:32:10.909 ' 00:32:10.909 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:10.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.909 --rc genhtml_branch_coverage=1 00:32:10.909 --rc genhtml_function_coverage=1 00:32:10.909 --rc genhtml_legend=1 00:32:10.909 --rc geninfo_all_blocks=1 00:32:10.909 --rc geninfo_unexecuted_blocks=1 00:32:10.909 00:32:10.909 ' 00:32:10.909 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:10.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.909 --rc genhtml_branch_coverage=1 00:32:10.909 --rc genhtml_function_coverage=1 00:32:10.909 --rc genhtml_legend=1 00:32:10.909 --rc geninfo_all_blocks=1 00:32:10.909 --rc geninfo_unexecuted_blocks=1 00:32:10.909 00:32:10.909 ' 00:32:10.909 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:10.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.909 --rc genhtml_branch_coverage=1 00:32:10.909 --rc genhtml_function_coverage=1 
00:32:10.909 --rc genhtml_legend=1 00:32:10.909 --rc geninfo_all_blocks=1 00:32:10.909 --rc geninfo_unexecuted_blocks=1 00:32:10.909 00:32:10.909 ' 00:32:10.909 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:10.910 11:55:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:32:10.910 11:55:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:19.056 11:55:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:19.056 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:19.056 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:19.056 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:19.056 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:19.056 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:19.057 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:19.057 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:19.057 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:19.057 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:19.057 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:19.057 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:19.057 
11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:19.057 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:19.057 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:19.057 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:19.057 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:19.057 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:19.057 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:19.057 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:19.057 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:19.057 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:19.057 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.549 ms 00:32:19.057 00:32:19.057 --- 10.0.0.2 ping statistics --- 00:32:19.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:19.057 rtt min/avg/max/mdev = 0.549/0.549/0.549/0.000 ms 00:32:19.057 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:19.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:19.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:32:19.057 00:32:19.057 --- 10.0.0.1 ping statistics --- 00:32:19.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:19.057 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:32:19.057 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:19.057 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:32:19.057 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:19.057 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:19.057 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:19.057 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:19.057 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:19.057 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:19.057 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:19.057 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:32:19.057 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:19.057 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:19.057 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:19.057 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1298673 00:32:19.057 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1298673 00:32:19.057 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:32:19.057 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 1298673 ']' 00:32:19.057 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:19.057 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:19.057 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:19.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:19.057 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:19.057 11:55:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:19.057 [2024-11-15 11:55:43.669834] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
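Note (annotation, not part of the captured output): the nvmf_tcp_init sequence above carves a target/initiator pair out of one dual-port E810 NIC. Port cvl_0_0 moves into the namespace cvl_0_0_ns_spdk as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and an iptables rule admits NVMe/TCP traffic on port 4420. A minimal standalone sketch of the same layout, assuming the interface names from this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port -> namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side (root ns)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
  ping -c 1 10.0.0.2                                    # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> root ns

The two sub-millisecond pings above are the sanity gate before the target is started.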
00:32:19.057 [2024-11-15 11:55:43.670979] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:32:19.057 [2024-11-15 11:55:43.671031] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:19.057 [2024-11-15 11:55:43.770984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:19.057 [2024-11-15 11:55:43.823855] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:19.057 [2024-11-15 11:55:43.823905] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:19.057 [2024-11-15 11:55:43.823915] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:19.057 [2024-11-15 11:55:43.823923] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:19.057 [2024-11-15 11:55:43.823934] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:19.057 [2024-11-15 11:55:43.825711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:19.057 [2024-11-15 11:55:43.825871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:19.057 [2024-11-15 11:55:43.825872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:19.057 [2024-11-15 11:55:43.903496] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:19.057 [2024-11-15 11:55:43.904542] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:19.057 [2024-11-15 11:55:43.904715] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:19.057 [2024-11-15 11:55:43.904906] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
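Note (annotation): nvmfappstart launches the target inside that namespace; the recorded command was:

  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x7

-m 0x7 places reactors on cores 0-2 (matching the three "Reactor started" notices), -e 0xFFFF enables all tracepoint groups (the "Tracepoint Group Mask 0xFFFF" notice), and --interrupt-mode makes idle reactors sleep on file descriptors instead of busy-polling, which is the behavior this interrupt_mode test group exists to exercise. waitforlisten then blocks until the RPC socket answers; an illustrative stand-in for that gate (not the harness helper itself) would be:

  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done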
00:32:19.057 11:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:19.057 11:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:32:19.057 11:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:19.057 11:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:19.057 11:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:19.057 11:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:19.057 11:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:19.318 [2024-11-15 11:55:44.694777] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:19.318 11:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:19.578 11:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:32:19.578 11:55:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:19.839 11:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:32:19.839 11:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:32:20.101 11:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:32:20.101 11:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=70064738-55af-402e-ac14-3e005ce22439 00:32:20.101 11:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 70064738-55af-402e-ac14-3e005ce22439 lvol 20 00:32:20.363 11:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=9b46f32e-90b0-49bf-aeb4-c9b5a60bd3c0 00:32:20.363 11:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:20.625 11:55:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9b46f32e-90b0-49bf-aeb4-c9b5a60bd3c0 00:32:20.625 11:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:20.885 [2024-11-15 11:55:46.258697] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 ***
00:32:20.885 11:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:32:21.146 11:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1299368
00:32:21.146 11:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1
00:32:21.146 11:55:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
00:32:22.090 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 9b46f32e-90b0-49bf-aeb4-c9b5a60bd3c0 MY_SNAPSHOT
00:32:22.352 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=25196e3d-cf23-4527-928c-a11619bb420a
00:32:22.352 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 9b46f32e-90b0-49bf-aeb4-c9b5a60bd3c0 30
00:32:22.614 11:55:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 25196e3d-cf23-4527-928c-a11619bb420a MY_CLONE
00:32:22.875 11:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=28bcb127-2656-470f-b80b-bd1c0b502f19
00:32:22.875 11:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 28bcb127-2656-470f-b80b-bd1c0b502f19
00:32:23.136 11:55:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1299368
00:32:33.139 Initializing NVMe Controllers
00:32:33.139 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:32:33.139 Controller IO queue size 128, less than required.
00:32:33.139 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:33.139 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:32:33.139 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:32:33.139 Initialization complete. Launching workers.
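Note (annotation): everything the perf run writes to was assembled over JSON-RPC in the entries above: two 64 MiB malloc bdevs with 512-byte blocks, striped into raid0, carrying lvstore "lvs", from which a 20 MiB lvol is exported as namespace 1 of cnode0; the snapshot/resize/clone/inflate calls then land while spdk_nvme_perf holds 128 queued random writes against it. The same sequence condensed, with each create's returned name or UUID captured the way the script does (the rpc.py path is shortened here; the UUIDs in this log belong to this particular run):

  rpc=scripts/rpc.py    # run from the spdk checkout
  $rpc nvmf_create_transport -t tcp -o -u 8192         # transport opts as recorded
  $rpc bdev_malloc_create 64 512                       # -> Malloc0
  $rpc bdev_malloc_create 64 512                       # -> Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)  # freeze the lvol, then...
  $rpc bdev_lvol_resize "$lvol" 30                     # ...grow the live lvol to 30 MiB
  clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)       # thin clone of the snapshot
  $rpc bdev_lvol_inflate "$clone"                      # decouple clone from snapshot

The table that follows reports the run split per initiator core (3 and 4), roughly 15k IOPS each despite the concurrent lvol surgery, and the teardown after it reverses the build order (subsystem, then lvol, then lvstore) so nothing is deleted while still claimed.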
00:32:33.139 ========================================================
00:32:33.139 Latency(us)
00:32:33.139 Device Information : IOPS MiB/s Average min max
00:32:33.139 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 15080.30 58.91 8490.50 1920.76 77776.57
00:32:33.139 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15336.10 59.91 8347.05 4333.32 70601.27
00:32:33.139 ========================================================
00:32:33.139 Total : 30416.40 118.81 8418.17 1920.76 77776.57
00:32:33.139
00:32:33.139 11:55:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:32:33.139 11:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9b46f32e-90b0-49bf-aeb4-c9b5a60bd3c0
00:32:33.139 11:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 70064738-55af-402e-ac14-3e005ce22439
00:32:33.139 11:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:32:33.139 11:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:32:33.139 11:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:32:33.139 11:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:32:33.139 11:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:32:33.139 11:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:32:33.139 11:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:32:33.139 11:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:32:33.139 11:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:32:33.139 rmmod nvme_tcp
00:32:33.139 rmmod nvme_fabrics
00:32:33.139 rmmod nvme_keyring
00:32:33.139 11:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:32:33.139 11:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:32:33.139 11:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:32:33.139 11:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1298673 ']'
00:32:33.139 11:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1298673
00:32:33.139 11:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 1298673 ']'
00:32:33.139 11:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 1298673
00:32:33.139 11:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # uname
00:32:33.139 11:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:32:33.139 11:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1298673 00:32:33.139 11:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:33.139 11:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:33.140 11:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1298673' 00:32:33.140 killing process with pid 1298673 00:32:33.140 11:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 1298673 00:32:33.140 11:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 1298673 00:32:33.140 11:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:33.140 11:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:33.140 11:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:33.140 11:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:32:33.140 11:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:32:33.140 11:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:33.140 11:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:32:33.140 11:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:33.140 11:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:33.140 11:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:33.140 11:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:33.140 11:55:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:34.525 11:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:34.525 00:32:34.525 real 0m23.913s 00:32:34.525 user 0m56.143s 00:32:34.525 sys 0m10.732s 00:32:34.525 11:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:34.525 11:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:34.525 ************************************ 00:32:34.525 END TEST nvmf_lvol 00:32:34.525 ************************************ 00:32:34.525 11:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:34.525 11:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:34.525 11:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:34.525 11:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:34.525 ************************************ 00:32:34.525 START TEST nvmf_lvs_grow 00:32:34.525 
************************************ 00:32:34.525 11:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:34.525 * Looking for test storage... 00:32:34.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:34.525 11:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:34.525 11:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:32:34.525 11:55:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:34.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:34.806 --rc genhtml_branch_coverage=1 00:32:34.806 --rc genhtml_function_coverage=1 00:32:34.806 --rc genhtml_legend=1 00:32:34.806 --rc geninfo_all_blocks=1 00:32:34.806 --rc geninfo_unexecuted_blocks=1 00:32:34.806 00:32:34.806 ' 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:34.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:34.806 --rc genhtml_branch_coverage=1 00:32:34.806 --rc genhtml_function_coverage=1 00:32:34.806 --rc genhtml_legend=1 00:32:34.806 --rc geninfo_all_blocks=1 00:32:34.806 --rc geninfo_unexecuted_blocks=1 00:32:34.806 00:32:34.806 ' 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:34.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:34.806 --rc genhtml_branch_coverage=1 00:32:34.806 --rc genhtml_function_coverage=1 00:32:34.806 --rc genhtml_legend=1 00:32:34.806 --rc geninfo_all_blocks=1 00:32:34.806 --rc geninfo_unexecuted_blocks=1 00:32:34.806 00:32:34.806 ' 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:34.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:34.806 --rc genhtml_branch_coverage=1 00:32:34.806 --rc genhtml_function_coverage=1 00:32:34.806 --rc genhtml_legend=1 00:32:34.806 --rc geninfo_all_blocks=1 00:32:34.806 --rc geninfo_unexecuted_blocks=1 00:32:34.806 00:32:34.806 ' 00:32:34.806 11:56:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:32:34.806 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:34.807 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:34.807 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:34.807 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.807 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.807 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.807 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:32:34.807 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.807 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:32:34.807 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:34.807 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:34.807 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:34.807 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:34.807 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
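Note (annotation): the scripts/common.sh xtrace earlier in this test's setup (the "lt 1.15 2" walk through cmp_versions) is the harness comparing the installed lcov version against 2 before exporting the LCOV_OPTS coverage flags seen above; cmp_versions splits each version string on dots, dashes, and colons and compares field by field. A minimal sketch of that comparison idiom, assuming purely numeric fields (not the harness function itself):

  lt() {  # succeed when version $1 sorts strictly before version $2
    local IFS=.-:
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields count as 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1  # equal is not less-than
  }
  lt 1.15 2 && echo "lcov older than 2"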
00:32:34.807 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:34.807 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:34.807 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:34.807 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:34.807 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:34.807 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:34.807 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:34.807 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:32:34.807 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:34.807 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:34.807 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:34.807 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:34.807 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:34.807 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:34.807 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:34.807 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:34.807 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:34.807 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:34.807 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:32:34.807 11:56:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:43.048 11:56:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
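Note (annotation): this device scan, repeated at the start of each test, buckets NICs by PCI vendor:device ID (0x8086:0x1592 and 0x8086:0x159b are Intel E810 variants bound to the ice driver, 0x8086:0x37d2 is X722, and the 0x15b3 entries are Mellanox ConnectX parts) and, since this job runs with SPDK_TEST_NVMF_NICS=e810, keeps only the E810 bucket. pci_bus_cache is internal to nvmf/common.sh; a rough standalone equivalent over lspci output would be:

  #!/usr/bin/env bash
  intel=8086 mellanox=15b3
  declare -a e810 x722 mlx
  # lspci -Dn prints: <domain:bus:dev.fn> <class>: <vendor:device> ...
  while read -r addr ids; do
    case "$ids" in
      "$intel:1592"|"$intel:159b") e810+=("$addr") ;;   # Intel E810 (ice)
      "$intel:37d2")               x722+=("$addr") ;;   # Intel X722
      "$mellanox:"*)               mlx+=("$addr")  ;;   # Mellanox ConnectX
    esac
  done < <(lspci -Dn | awk '$2 ~ /^02/ {print $1, $3}')  # class 02xx = network
  echo "E810 ports: ${e810[*]}"

On this machine that yields the same two hits the log reports: 0000:4b:00.0 and 0000:4b:00.1.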
00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:43.048 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:43.048 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:43.048 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:43.048 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:32:43.048 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:43.049 11:56:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:43.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:43.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.536 ms 00:32:43.049 00:32:43.049 --- 10.0.0.2 ping statistics --- 00:32:43.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:43.049 rtt min/avg/max/mdev = 0.536/0.536/0.536/0.000 ms 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:43.049 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:43.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:32:43.049 00:32:43.049 --- 10.0.0.1 ping statistics --- 00:32:43.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:43.049 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1305597 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1305597 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 1305597 ']' 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:43.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:43.049 11:56:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:43.049 [2024-11-15 11:56:07.655651] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
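The xtrace above is nvmf_tcp_init wiring the two E810 ports (cvl_0_0, cvl_0_1) into a point-to-point NVMe/TCP rig on a single host: the target-side port is moved into its own network namespace so target and initiator traffic really crosses the wire. A minimal sketch of that sequence, condensed from the trace (interface names, addresses and the namespace name are the ones this run used; iptables stands in for the ipts helper, which only appends a bookkeeping comment to the rule):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, root netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                        # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target ns -> root ns

Every target-side command from here on carries the ip netns exec cvl_0_0_ns_spdk prefix, which is why the target below is launched as ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1.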
00:32:43.049 [2024-11-15 11:56:07.656788] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:32:43.049 [2024-11-15 11:56:07.656840] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:43.049 [2024-11-15 11:56:07.755486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:43.049 [2024-11-15 11:56:07.806520] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:43.049 [2024-11-15 11:56:07.806582] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:43.049 [2024-11-15 11:56:07.806590] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:43.049 [2024-11-15 11:56:07.806598] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:43.049 [2024-11-15 11:56:07.806604] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:43.049 [2024-11-15 11:56:07.807365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:43.049 [2024-11-15 11:56:07.885296] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:43.049 [2024-11-15 11:56:07.885600] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:43.049 11:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:43.049 11:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:32:43.049 11:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:43.049 11:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:43.049 11:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:43.049 11:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:43.049 11:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:43.310 [2024-11-15 11:56:08.672237] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:43.311 11:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:32:43.311 11:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:32:43.311 11:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:43.311 11:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:43.311 ************************************ 00:32:43.311 START TEST lvs_grow_clean 00:32:43.311 ************************************ 00:32:43.311 11:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # 
lvs_grow 00:32:43.311 11:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:43.311 11:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:43.311 11:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:43.311 11:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:43.311 11:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:43.311 11:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:43.311 11:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:43.311 11:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:43.311 11:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:43.571 11:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:43.571 11:56:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:43.833 11:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=86f0258e-bd67-4198-b79d-7a8d41549f3f 00:32:43.833 11:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86f0258e-bd67-4198-b79d-7a8d41549f3f 00:32:43.833 11:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:44.094 11:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:44.094 11:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:44.094 11:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 86f0258e-bd67-4198-b79d-7a8d41549f3f lvol 150 00:32:44.356 11:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=7722450d-962e-484a-8092-b3441ce41f3a 00:32:44.356 11:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:44.356 11:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:44.356 [2024-11-15 11:56:09.763935] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:44.356 [2024-11-15 11:56:09.764103] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:44.356 true 00:32:44.356 11:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:44.356 11:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86f0258e-bd67-4198-b79d-7a8d41549f3f 00:32:44.618 11:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:44.618 11:56:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:44.879 11:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7722450d-962e-484a-8092-b3441ce41f3a 00:32:44.879 11:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:45.139 [2024-11-15 11:56:10.484613] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:45.139 11:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:45.400 11:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1306118 00:32:45.400 11:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:45.400 11:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:45.400 11:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1306118 /var/tmp/bdevperf.sock 00:32:45.400 11:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 1306118 ']' 00:32:45.400 11:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:32:45.400 11:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:45.400 11:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:45.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:45.400 11:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:45.400 11:56:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:45.400 [2024-11-15 11:56:10.722731] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:32:45.400 [2024-11-15 11:56:10.722801] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1306118 ] 00:32:45.400 [2024-11-15 11:56:10.816189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:45.400 [2024-11-15 11:56:10.869003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:46.340 11:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:46.340 11:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:32:46.341 11:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:46.602 Nvme0n1 00:32:46.602 11:56:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:46.602 [ 00:32:46.602 { 00:32:46.602 "name": "Nvme0n1", 00:32:46.602 "aliases": [ 00:32:46.602 "7722450d-962e-484a-8092-b3441ce41f3a" 00:32:46.602 ], 00:32:46.602 "product_name": "NVMe disk", 00:32:46.602 "block_size": 4096, 00:32:46.602 "num_blocks": 38912, 00:32:46.602 "uuid": "7722450d-962e-484a-8092-b3441ce41f3a", 00:32:46.602 "numa_id": 0, 00:32:46.602 "assigned_rate_limits": { 00:32:46.602 "rw_ios_per_sec": 0, 00:32:46.602 "rw_mbytes_per_sec": 0, 00:32:46.602 "r_mbytes_per_sec": 0, 00:32:46.602 "w_mbytes_per_sec": 0 00:32:46.602 }, 00:32:46.602 "claimed": false, 00:32:46.602 "zoned": false, 00:32:46.602 "supported_io_types": { 00:32:46.602 "read": true, 00:32:46.602 "write": true, 00:32:46.602 "unmap": true, 00:32:46.602 "flush": true, 00:32:46.602 "reset": true, 00:32:46.602 "nvme_admin": true, 00:32:46.602 "nvme_io": true, 00:32:46.602 "nvme_io_md": false, 00:32:46.602 "write_zeroes": true, 00:32:46.602 "zcopy": false, 00:32:46.602 "get_zone_info": false, 00:32:46.602 "zone_management": false, 00:32:46.602 "zone_append": false, 00:32:46.602 "compare": true, 00:32:46.602 "compare_and_write": true, 00:32:46.602 "abort": true, 00:32:46.602 "seek_hole": false, 00:32:46.602 "seek_data": false, 00:32:46.602 "copy": true, 
00:32:46.602 "nvme_iov_md": false 00:32:46.602 }, 00:32:46.602 "memory_domains": [ 00:32:46.602 { 00:32:46.602 "dma_device_id": "system", 00:32:46.602 "dma_device_type": 1 00:32:46.602 } 00:32:46.602 ], 00:32:46.602 "driver_specific": { 00:32:46.602 "nvme": [ 00:32:46.602 { 00:32:46.602 "trid": { 00:32:46.602 "trtype": "TCP", 00:32:46.602 "adrfam": "IPv4", 00:32:46.602 "traddr": "10.0.0.2", 00:32:46.602 "trsvcid": "4420", 00:32:46.602 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:46.602 }, 00:32:46.602 "ctrlr_data": { 00:32:46.602 "cntlid": 1, 00:32:46.602 "vendor_id": "0x8086", 00:32:46.602 "model_number": "SPDK bdev Controller", 00:32:46.602 "serial_number": "SPDK0", 00:32:46.602 "firmware_revision": "25.01", 00:32:46.602 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:46.602 "oacs": { 00:32:46.602 "security": 0, 00:32:46.602 "format": 0, 00:32:46.602 "firmware": 0, 00:32:46.602 "ns_manage": 0 00:32:46.602 }, 00:32:46.602 "multi_ctrlr": true, 00:32:46.602 "ana_reporting": false 00:32:46.602 }, 00:32:46.602 "vs": { 00:32:46.602 "nvme_version": "1.3" 00:32:46.602 }, 00:32:46.602 "ns_data": { 00:32:46.602 "id": 1, 00:32:46.602 "can_share": true 00:32:46.602 } 00:32:46.602 } 00:32:46.602 ], 00:32:46.602 "mp_policy": "active_passive" 00:32:46.602 } 00:32:46.602 } 00:32:46.602 ] 00:32:46.602 11:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1306454 00:32:46.602 11:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:46.602 11:56:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:46.862 Running I/O for 10 seconds... 
00:32:47.805 Latency(us) 00:32:47.805 [2024-11-15T10:56:13.303Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:47.805 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:47.805 Nvme0n1 : 1.00 16637.00 64.99 0.00 0.00 0.00 0.00 0.00 00:32:47.805 [2024-11-15T10:56:13.303Z] =================================================================================================================== 00:32:47.805 [2024-11-15T10:56:13.303Z] Total : 16637.00 64.99 0.00 0.00 0.00 0.00 0.00 00:32:47.805 00:32:48.761 11:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 86f0258e-bd67-4198-b79d-7a8d41549f3f 00:32:48.761 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:48.761 Nvme0n1 : 2.00 16891.00 65.98 0.00 0.00 0.00 0.00 0.00 00:32:48.761 [2024-11-15T10:56:14.259Z] =================================================================================================================== 00:32:48.761 [2024-11-15T10:56:14.259Z] Total : 16891.00 65.98 0.00 0.00 0.00 0.00 0.00 00:32:48.761 00:32:49.021 true 00:32:49.021 11:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86f0258e-bd67-4198-b79d-7a8d41549f3f 00:32:49.021 11:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:49.021 11:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:49.021 11:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:49.021 11:56:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1306454 00:32:49.964 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:49.964 Nvme0n1 : 3.00 17145.00 66.97 0.00 0.00 0.00 0.00 0.00 00:32:49.964 [2024-11-15T10:56:15.462Z] =================================================================================================================== 00:32:49.964 [2024-11-15T10:56:15.462Z] Total : 17145.00 66.97 0.00 0.00 0.00 0.00 0.00 00:32:49.964 00:32:50.907 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:50.907 Nvme0n1 : 4.00 17684.75 69.08 0.00 0.00 0.00 0.00 0.00 00:32:50.907 [2024-11-15T10:56:16.405Z] =================================================================================================================== 00:32:50.907 [2024-11-15T10:56:16.405Z] Total : 17684.75 69.08 0.00 0.00 0.00 0.00 0.00 00:32:50.907 00:32:51.849 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:51.849 Nvme0n1 : 5.00 19180.40 74.92 0.00 0.00 0.00 0.00 0.00 00:32:51.849 [2024-11-15T10:56:17.347Z] =================================================================================================================== 00:32:51.849 [2024-11-15T10:56:17.347Z] Total : 19180.40 74.92 0.00 0.00 0.00 0.00 0.00 00:32:51.849 00:32:52.789 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:52.789 Nvme0n1 : 6.00 20195.83 78.89 0.00 0.00 0.00 0.00 0.00 00:32:52.789 [2024-11-15T10:56:18.287Z] 
=================================================================================================================== 00:32:52.789 [2024-11-15T10:56:18.287Z] Total : 20195.83 78.89 0.00 0.00 0.00 0.00 0.00 00:32:52.789 00:32:53.729 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:53.729 Nvme0n1 : 7.00 20912.14 81.69 0.00 0.00 0.00 0.00 0.00 00:32:53.729 [2024-11-15T10:56:19.227Z] =================================================================================================================== 00:32:53.729 [2024-11-15T10:56:19.227Z] Total : 20912.14 81.69 0.00 0.00 0.00 0.00 0.00 00:32:53.729 00:32:55.113 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:55.113 Nvme0n1 : 8.00 21455.50 83.81 0.00 0.00 0.00 0.00 0.00 00:32:55.113 [2024-11-15T10:56:20.611Z] =================================================================================================================== 00:32:55.113 [2024-11-15T10:56:20.611Z] Total : 21455.50 83.81 0.00 0.00 0.00 0.00 0.00 00:32:55.113 00:32:56.052 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:56.052 Nvme0n1 : 9.00 21879.67 85.47 0.00 0.00 0.00 0.00 0.00 00:32:56.052 [2024-11-15T10:56:21.550Z] =================================================================================================================== 00:32:56.052 [2024-11-15T10:56:21.550Z] Total : 21879.67 85.47 0.00 0.00 0.00 0.00 0.00 00:32:56.052 00:32:56.991 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:56.991 Nvme0n1 : 10.00 22219.00 86.79 0.00 0.00 0.00 0.00 0.00 00:32:56.991 [2024-11-15T10:56:22.489Z] =================================================================================================================== 00:32:56.991 [2024-11-15T10:56:22.489Z] Total : 22219.00 86.79 0.00 0.00 0.00 0.00 0.00 00:32:56.991 00:32:56.991 00:32:56.991 Latency(us) 00:32:56.991 [2024-11-15T10:56:22.489Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:56.991 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:56.991 Nvme0n1 : 10.00 22221.97 86.80 0.00 0.00 5756.64 2921.81 32331.09 00:32:56.991 [2024-11-15T10:56:22.489Z] =================================================================================================================== 00:32:56.991 [2024-11-15T10:56:22.489Z] Total : 22221.97 86.80 0.00 0.00 5756.64 2921.81 32331.09 00:32:56.991 { 00:32:56.991 "results": [ 00:32:56.991 { 00:32:56.991 "job": "Nvme0n1", 00:32:56.991 "core_mask": "0x2", 00:32:56.991 "workload": "randwrite", 00:32:56.991 "status": "finished", 00:32:56.991 "queue_depth": 128, 00:32:56.991 "io_size": 4096, 00:32:56.991 "runtime": 10.004424, 00:32:56.991 "iops": 22221.969000913996, 00:32:56.991 "mibps": 86.8045664098203, 00:32:56.992 "io_failed": 0, 00:32:56.992 "io_timeout": 0, 00:32:56.992 "avg_latency_us": 5756.644472152502, 00:32:56.992 "min_latency_us": 2921.8133333333335, 00:32:56.992 "max_latency_us": 32331.093333333334 00:32:56.992 } 00:32:56.992 ], 00:32:56.992 "core_count": 1 00:32:56.992 } 00:32:56.992 11:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1306118 00:32:56.992 11:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 1306118 ']' 00:32:56.992 11:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 1306118 
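Those per-second rows are the point of the test: IOPS climb from ~16.6k to ~22.2k while, two seconds into the run, the backing file is doubled and the lvstore is grown under live I/O, and the final total_data_clusters check confirms 49 -> 99. The grow mechanics, condensed from the trace (rpc.py again abbreviates the full path; AIO_FILE is shorthand for the .../spdk/test/nvmf/target/aio_bdev file above; the UUID is this run's lvstore):

    truncate -s 200M "$AIO_FILE"
    rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096
    rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs          # 49 data clusters
    rpc.py bdev_lvol_create -u 86f0258e-bd67-4198-b79d-7a8d41549f3f lvol 150
    truncate -s 400M "$AIO_FILE"                               # grow the file in place
    rpc.py bdev_aio_rescan aio_bdev                            # 51200 -> 102400 blocks
    # issued while bdevperf is mid-run:
    rpc.py bdev_lvol_grow_lvstore -u 86f0258e-bd67-4198-b79d-7a8d41549f3f
    rpc.py bdev_lvol_get_lvstores -u 86f0258e-bd67-4198-b79d-7a8d41549f3f \
        | jq -r '.[0].total_data_clusters'                     # expect 99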
00:32:56.992 11:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:32:56.992 11:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:56.992 11:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1306118 00:32:56.992 11:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:56.992 11:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:56.992 11:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1306118' 00:32:56.992 killing process with pid 1306118 00:32:56.992 11:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 1306118 00:32:56.992 Received shutdown signal, test time was about 10.000000 seconds 00:32:56.992 00:32:56.992 Latency(us) 00:32:56.992 [2024-11-15T10:56:22.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:56.992 [2024-11-15T10:56:22.490Z] =================================================================================================================== 00:32:56.992 [2024-11-15T10:56:22.490Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:56.992 11:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 1306118 00:32:56.992 11:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:57.252 11:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:57.511 11:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86f0258e-bd67-4198-b79d-7a8d41549f3f 00:32:57.511 11:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:57.511 11:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:57.511 11:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:32:57.511 11:56:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:57.771 [2024-11-15 11:56:23.092001] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:57.771 11:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86f0258e-bd67-4198-b79d-7a8d41549f3f 
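Teardown doubles as a hot-remove check. With free_clusters now 61 (99 total minus the lvol's 38), deleting the backing aio_bdev must take the lvstore down with it, so the NOT wrapper below asserts that a follow-up bdev_lvol_get_lvstores fails; the expected failure is the -19 "No such device" JSON-RPC response a few lines down. In outline (rpc.py abbreviates the full path; NOT is the autotest_common.sh helper that inverts an exit status):

    rpc.py bdev_aio_delete aio_bdev      # hot-removes lvstore "lvs" as well
    if rpc.py bdev_lvol_get_lvstores -u 86f0258e-bd67-4198-b79d-7a8d41549f3f; then
        echo "lvstore survived aio_bdev removal" >&2
        exit 1                           # the lookup must fail here
    fi
    # recreate the aio_bdev from the same file: the on-disk lvstore metadata
    # must come back intact (free_clusters == 61, total_data_clusters == 99)
    rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096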
00:32:57.771 11:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:32:57.771 11:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86f0258e-bd67-4198-b79d-7a8d41549f3f 00:32:57.771 11:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:57.771 11:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:57.771 11:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:57.771 11:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:57.771 11:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:57.771 11:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:57.771 11:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:57.771 11:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:57.771 11:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86f0258e-bd67-4198-b79d-7a8d41549f3f 00:32:58.031 request: 00:32:58.031 { 00:32:58.031 "uuid": "86f0258e-bd67-4198-b79d-7a8d41549f3f", 00:32:58.032 "method": "bdev_lvol_get_lvstores", 00:32:58.032 "req_id": 1 00:32:58.032 } 00:32:58.032 Got JSON-RPC error response 00:32:58.032 response: 00:32:58.032 { 00:32:58.032 "code": -19, 00:32:58.032 "message": "No such device" 00:32:58.032 } 00:32:58.032 11:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:32:58.032 11:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:58.032 11:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:58.032 11:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:58.032 11:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:58.032 aio_bdev 00:32:58.032 11:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
7722450d-962e-484a-8092-b3441ce41f3a 00:32:58.032 11:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=7722450d-962e-484a-8092-b3441ce41f3a 00:32:58.032 11:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:32:58.032 11:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:32:58.032 11:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:32:58.032 11:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:32:58.032 11:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:58.292 11:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7722450d-962e-484a-8092-b3441ce41f3a -t 2000 00:32:58.553 [ 00:32:58.553 { 00:32:58.553 "name": "7722450d-962e-484a-8092-b3441ce41f3a", 00:32:58.553 "aliases": [ 00:32:58.553 "lvs/lvol" 00:32:58.553 ], 00:32:58.553 "product_name": "Logical Volume", 00:32:58.553 "block_size": 4096, 00:32:58.553 "num_blocks": 38912, 00:32:58.553 "uuid": "7722450d-962e-484a-8092-b3441ce41f3a", 00:32:58.553 "assigned_rate_limits": { 00:32:58.553 "rw_ios_per_sec": 0, 00:32:58.553 "rw_mbytes_per_sec": 0, 00:32:58.553 "r_mbytes_per_sec": 0, 00:32:58.553 "w_mbytes_per_sec": 0 00:32:58.553 }, 00:32:58.553 "claimed": false, 00:32:58.553 "zoned": false, 00:32:58.553 "supported_io_types": { 00:32:58.553 "read": true, 00:32:58.553 "write": true, 00:32:58.553 "unmap": true, 00:32:58.553 "flush": false, 00:32:58.553 "reset": true, 00:32:58.553 "nvme_admin": false, 00:32:58.553 "nvme_io": false, 00:32:58.553 "nvme_io_md": false, 00:32:58.553 "write_zeroes": true, 00:32:58.553 "zcopy": false, 00:32:58.553 "get_zone_info": false, 00:32:58.553 "zone_management": false, 00:32:58.553 "zone_append": false, 00:32:58.553 "compare": false, 00:32:58.553 "compare_and_write": false, 00:32:58.553 "abort": false, 00:32:58.553 "seek_hole": true, 00:32:58.553 "seek_data": true, 00:32:58.553 "copy": false, 00:32:58.553 "nvme_iov_md": false 00:32:58.553 }, 00:32:58.553 "driver_specific": { 00:32:58.553 "lvol": { 00:32:58.553 "lvol_store_uuid": "86f0258e-bd67-4198-b79d-7a8d41549f3f", 00:32:58.553 "base_bdev": "aio_bdev", 00:32:58.553 "thin_provision": false, 00:32:58.553 "num_allocated_clusters": 38, 00:32:58.553 "snapshot": false, 00:32:58.553 "clone": false, 00:32:58.553 "esnap_clone": false 00:32:58.553 } 00:32:58.553 } 00:32:58.553 } 00:32:58.554 ] 00:32:58.554 11:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:32:58.554 11:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86f0258e-bd67-4198-b79d-7a8d41549f3f 00:32:58.554 11:56:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:58.554 11:56:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:58.554 11:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 86f0258e-bd67-4198-b79d-7a8d41549f3f 00:32:58.554 11:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:58.813 11:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:58.813 11:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7722450d-962e-484a-8092-b3441ce41f3a 00:32:59.073 11:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 86f0258e-bd67-4198-b79d-7a8d41549f3f 00:32:59.073 11:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:59.333 11:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:59.333 00:32:59.333 real 0m16.021s 00:32:59.333 user 0m15.659s 00:32:59.333 sys 0m1.473s 00:32:59.333 11:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:59.333 11:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:59.333 ************************************ 00:32:59.333 END TEST lvs_grow_clean 00:32:59.333 ************************************ 00:32:59.333 11:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:32:59.333 11:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:32:59.333 11:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:59.333 11:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:59.594 ************************************ 00:32:59.594 START TEST lvs_grow_dirty 00:32:59.594 ************************************ 00:32:59.594 11:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:32:59.594 11:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:59.594 11:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:59.594 11:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:59.594 11:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:59.594 11:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:59.594 11:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:59.594 11:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:59.594 11:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:59.594 11:56:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:59.594 11:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:59.594 11:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:59.855 11:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=ea28b2e9-8ab2-4da5-81de-dd86e4953926 00:32:59.855 11:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ea28b2e9-8ab2-4da5-81de-dd86e4953926 00:32:59.855 11:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:33:00.115 11:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:33:00.115 11:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:33:00.115 11:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ea28b2e9-8ab2-4da5-81de-dd86e4953926 lvol 150 00:33:00.115 11:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=32f7402b-0577-4d5b-a9b3-1b6f381230b7 00:33:00.115 11:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:00.115 11:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:33:00.375 [2024-11-15 11:56:25.747936] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:33:00.375 [2024-11-15 11:56:25.748103] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:33:00.375 true 00:33:00.375 11:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ea28b2e9-8ab2-4da5-81de-dd86e4953926 00:33:00.375 11:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:33:00.636 11:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:33:00.636 11:56:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:33:00.896 11:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 32f7402b-0577-4d5b-a9b3-1b6f381230b7 00:33:00.896 11:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:01.156 [2024-11-15 11:56:26.492486] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:01.156 11:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:01.416 11:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1309193 00:33:01.416 11:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:01.416 11:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:33:01.416 11:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1309193 /var/tmp/bdevperf.sock 00:33:01.416 11:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 1309193 ']' 00:33:01.416 11:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:01.416 11:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:01.416 11:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:01.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
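The dirty variant repeats the same flow, and the I/O generator is again SPDK's bdevperf example started idle as a second process with its own RPC socket; the workload only begins once the controller is attached and perform_tests is issued. The launch pattern, condensed from the trace (paths shortened; waitforlisten is the autotest_common.sh helper that polls the pid and socket):

    build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &    # -z: start idle, wait for RPC
    bdevperf_pid=$!
    waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

-o 4096 and -q 128 set a 4 KiB random-write workload at queue depth 128 for -t 10 seconds, matching the "Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096" header on every result row, and -S 1 is what produces the per-second rows.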
00:33:01.416 11:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:01.416 11:56:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:01.416 [2024-11-15 11:56:26.745513] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:33:01.416 [2024-11-15 11:56:26.745588] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1309193 ] 00:33:01.416 [2024-11-15 11:56:26.834814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:01.416 [2024-11-15 11:56:26.868827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:02.357 11:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:02.357 11:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:33:02.357 11:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:33:02.617 Nvme0n1 00:33:02.617 11:56:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:33:02.617 [ 00:33:02.617 { 00:33:02.617 "name": "Nvme0n1", 00:33:02.617 "aliases": [ 00:33:02.617 "32f7402b-0577-4d5b-a9b3-1b6f381230b7" 00:33:02.617 ], 00:33:02.617 "product_name": "NVMe disk", 00:33:02.617 "block_size": 4096, 00:33:02.617 "num_blocks": 38912, 00:33:02.617 "uuid": "32f7402b-0577-4d5b-a9b3-1b6f381230b7", 00:33:02.617 "numa_id": 0, 00:33:02.617 "assigned_rate_limits": { 00:33:02.617 "rw_ios_per_sec": 0, 00:33:02.617 "rw_mbytes_per_sec": 0, 00:33:02.617 "r_mbytes_per_sec": 0, 00:33:02.617 "w_mbytes_per_sec": 0 00:33:02.617 }, 00:33:02.617 "claimed": false, 00:33:02.617 "zoned": false, 00:33:02.617 "supported_io_types": { 00:33:02.617 "read": true, 00:33:02.617 "write": true, 00:33:02.617 "unmap": true, 00:33:02.617 "flush": true, 00:33:02.617 "reset": true, 00:33:02.617 "nvme_admin": true, 00:33:02.617 "nvme_io": true, 00:33:02.617 "nvme_io_md": false, 00:33:02.617 "write_zeroes": true, 00:33:02.617 "zcopy": false, 00:33:02.617 "get_zone_info": false, 00:33:02.617 "zone_management": false, 00:33:02.617 "zone_append": false, 00:33:02.617 "compare": true, 00:33:02.617 "compare_and_write": true, 00:33:02.617 "abort": true, 00:33:02.617 "seek_hole": false, 00:33:02.617 "seek_data": false, 00:33:02.617 "copy": true, 00:33:02.617 "nvme_iov_md": false 00:33:02.617 }, 00:33:02.617 "memory_domains": [ 00:33:02.617 { 00:33:02.617 "dma_device_id": "system", 00:33:02.617 "dma_device_type": 1 00:33:02.617 } 00:33:02.617 ], 00:33:02.617 "driver_specific": { 00:33:02.617 "nvme": [ 00:33:02.617 { 00:33:02.617 "trid": { 00:33:02.617 "trtype": "TCP", 00:33:02.617 "adrfam": "IPv4", 00:33:02.617 "traddr": "10.0.0.2", 00:33:02.617 "trsvcid": "4420", 00:33:02.617 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:33:02.617 }, 00:33:02.617 "ctrlr_data": 
{ 00:33:02.617 "cntlid": 1, 00:33:02.617 "vendor_id": "0x8086", 00:33:02.617 "model_number": "SPDK bdev Controller", 00:33:02.617 "serial_number": "SPDK0", 00:33:02.617 "firmware_revision": "25.01", 00:33:02.617 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:02.617 "oacs": { 00:33:02.617 "security": 0, 00:33:02.617 "format": 0, 00:33:02.617 "firmware": 0, 00:33:02.617 "ns_manage": 0 00:33:02.617 }, 00:33:02.617 "multi_ctrlr": true, 00:33:02.617 "ana_reporting": false 00:33:02.617 }, 00:33:02.617 "vs": { 00:33:02.617 "nvme_version": "1.3" 00:33:02.617 }, 00:33:02.617 "ns_data": { 00:33:02.617 "id": 1, 00:33:02.618 "can_share": true 00:33:02.618 } 00:33:02.618 } 00:33:02.618 ], 00:33:02.618 "mp_policy": "active_passive" 00:33:02.618 } 00:33:02.618 } 00:33:02.618 ] 00:33:02.618 11:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:02.618 11:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1309529 00:33:02.618 11:56:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:33:02.878 Running I/O for 10 seconds... 00:33:03.818 Latency(us) 00:33:03.818 [2024-11-15T10:56:29.316Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:03.818 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:03.818 Nvme0n1 : 1.00 17336.00 67.72 0.00 0.00 0.00 0.00 0.00 00:33:03.818 [2024-11-15T10:56:29.316Z] =================================================================================================================== 00:33:03.818 [2024-11-15T10:56:29.316Z] Total : 17336.00 67.72 0.00 0.00 0.00 0.00 0.00 00:33:03.818 00:33:04.759 11:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ea28b2e9-8ab2-4da5-81de-dd86e4953926 00:33:04.759 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:04.759 Nvme0n1 : 2.00 17589.50 68.71 0.00 0.00 0.00 0.00 0.00 00:33:04.759 [2024-11-15T10:56:30.257Z] =================================================================================================================== 00:33:04.759 [2024-11-15T10:56:30.257Z] Total : 17589.50 68.71 0.00 0.00 0.00 0.00 0.00 00:33:04.759 00:33:05.019 true 00:33:05.019 11:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ea28b2e9-8ab2-4da5-81de-dd86e4953926 00:33:05.019 11:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:33:05.019 11:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:33:05.019 11:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:33:05.019 11:56:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1309529 00:33:05.962 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:05.962 Nvme0n1 : 
3.00 17653.00 68.96 0.00 0.00 0.00 0.00 0.00 00:33:05.962 [2024-11-15T10:56:31.460Z] =================================================================================================================== 00:33:05.962 [2024-11-15T10:56:31.460Z] Total : 17653.00 68.96 0.00 0.00 0.00 0.00 0.00 00:33:05.962 00:33:06.904 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:06.904 Nvme0n1 : 4.00 17716.50 69.21 0.00 0.00 0.00 0.00 0.00 00:33:06.904 [2024-11-15T10:56:32.402Z] =================================================================================================================== 00:33:06.904 [2024-11-15T10:56:32.402Z] Total : 17716.50 69.21 0.00 0.00 0.00 0.00 0.00 00:33:06.904 00:33:07.842 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:07.843 Nvme0n1 : 5.00 18618.20 72.73 0.00 0.00 0.00 0.00 0.00 00:33:07.843 [2024-11-15T10:56:33.341Z] =================================================================================================================== 00:33:07.843 [2024-11-15T10:56:33.341Z] Total : 18618.20 72.73 0.00 0.00 0.00 0.00 0.00 00:33:07.843 00:33:08.785 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:08.785 Nvme0n1 : 6.00 19727.33 77.06 0.00 0.00 0.00 0.00 0.00 00:33:08.785 [2024-11-15T10:56:34.283Z] =================================================================================================================== 00:33:08.785 [2024-11-15T10:56:34.283Z] Total : 19727.33 77.06 0.00 0.00 0.00 0.00 0.00 00:33:08.785 00:33:09.725 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:09.725 Nvme0n1 : 7.00 20519.57 80.15 0.00 0.00 0.00 0.00 0.00 00:33:09.725 [2024-11-15T10:56:35.223Z] =================================================================================================================== 00:33:09.725 [2024-11-15T10:56:35.223Z] Total : 20519.57 80.15 0.00 0.00 0.00 0.00 0.00 00:33:09.725 00:33:11.115 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:11.115 Nvme0n1 : 8.00 21113.75 82.48 0.00 0.00 0.00 0.00 0.00 00:33:11.115 [2024-11-15T10:56:36.613Z] =================================================================================================================== 00:33:11.115 [2024-11-15T10:56:36.613Z] Total : 21113.75 82.48 0.00 0.00 0.00 0.00 0.00 00:33:11.115 00:33:12.052 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:12.052 Nvme0n1 : 9.00 21575.89 84.28 0.00 0.00 0.00 0.00 0.00 00:33:12.052 [2024-11-15T10:56:37.550Z] =================================================================================================================== 00:33:12.052 [2024-11-15T10:56:37.550Z] Total : 21575.89 84.28 0.00 0.00 0.00 0.00 0.00 00:33:12.052 00:33:12.992 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:12.992 Nvme0n1 : 10.00 21952.00 85.75 0.00 0.00 0.00 0.00 0.00 00:33:12.992 [2024-11-15T10:56:38.490Z] =================================================================================================================== 00:33:12.992 [2024-11-15T10:56:38.490Z] Total : 21952.00 85.75 0.00 0.00 0.00 0.00 0.00 00:33:12.992 00:33:12.992 00:33:12.992 Latency(us) 00:33:12.992 [2024-11-15T10:56:38.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:12.992 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:12.992 Nvme0n1 : 10.00 21951.96 85.75 0.00 0.00 5827.84 4724.05 30583.47 00:33:12.992 
[2024-11-15T10:56:38.490Z] =================================================================================================================== 00:33:12.992 [2024-11-15T10:56:38.490Z] Total : 21951.96 85.75 0.00 0.00 5827.84 4724.05 30583.47 00:33:12.992 { 00:33:12.992 "results": [ 00:33:12.992 { 00:33:12.992 "job": "Nvme0n1", 00:33:12.992 "core_mask": "0x2", 00:33:12.992 "workload": "randwrite", 00:33:12.992 "status": "finished", 00:33:12.992 "queue_depth": 128, 00:33:12.992 "io_size": 4096, 00:33:12.992 "runtime": 10.002934, 00:33:12.992 "iops": 21951.959295142806, 00:33:12.992 "mibps": 85.74984099665159, 00:33:12.992 "io_failed": 0, 00:33:12.992 "io_timeout": 0, 00:33:12.992 "avg_latency_us": 5827.843730690761, 00:33:12.992 "min_latency_us": 4724.053333333333, 00:33:12.992 "max_latency_us": 30583.466666666667 00:33:12.992 } 00:33:12.992 ], 00:33:12.992 "core_count": 1 00:33:12.992 } 00:33:12.992 11:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1309193 00:33:12.992 11:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 1309193 ']' 00:33:12.992 11:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 1309193 00:33:12.992 11:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:33:12.992 11:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:12.992 11:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1309193 00:33:12.992 11:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:33:12.992 11:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:33:12.992 11:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1309193' 00:33:12.992 killing process with pid 1309193 00:33:12.992 11:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 1309193 00:33:12.992 Received shutdown signal, test time was about 10.000000 seconds 00:33:12.992 00:33:12.992 Latency(us) 00:33:12.992 [2024-11-15T10:56:38.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:12.992 [2024-11-15T10:56:38.490Z] =================================================================================================================== 00:33:12.992 [2024-11-15T10:56:38.490Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:12.992 11:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 1309193 00:33:12.992 11:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:13.251 11:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:33:13.512 11:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ea28b2e9-8ab2-4da5-81de-dd86e4953926 00:33:13.512 11:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:33:13.512 11:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:33:13.512 11:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:33:13.512 11:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1305597 00:33:13.512 11:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1305597 00:33:13.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1305597 Killed "${NVMF_APP[@]}" "$@" 00:33:13.512 11:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:33:13.512 11:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:33:13.512 11:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:13.512 11:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:13.512 11:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:13.512 11:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1311548 00:33:13.512 11:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1311548 00:33:13.512 11:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:33:13.512 11:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 1311548 ']' 00:33:13.512 11:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:13.512 11:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:13.512 11:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:13.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
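The trace above is the heart of the lvs_grow_dirty case: the original nvmf_tgt (pid 1305597) is killed with SIGKILL while the grown lvstore is still open, so its superblock is left dirty on the aio backing file, and a replacement target is started in interrupt mode. A minimal sketch of that kill/restart/wait pattern, assuming SPDK_DIR points at a built SPDK tree; the polling loop here stands in for the suite's waitforlisten helper and is not the suite's actual code:

#!/usr/bin/env bash
# Sketch (assumptions noted): SIGKILL the target that owns the lvstore,
# leaving its superblock dirty, then start a fresh one in interrupt mode
# and poll the RPC socket until it answers.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
old_pid=${1:?pid of the running nvmf_tgt}

kill -9 "$old_pid"    # no clean shutdown, so the lvstore stays dirty

"$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x1 &

# rpc_get_methods is a built-in RPC; once it succeeds the app is up and
# listening on /var/tmp/spdk.sock (this loop stands in for waitforlisten).
for _ in $(seq 1 100); do
  "$SPDK_DIR/scripts/rpc.py" -t 1 rpc_get_methods &>/dev/null && break
  sleep 0.1
done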
00:33:13.512 11:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:13.512 11:56:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:13.772 [2024-11-15 11:56:39.025040] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:13.772 [2024-11-15 11:56:39.026025] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:33:13.772 [2024-11-15 11:56:39.026069] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:13.773 [2024-11-15 11:56:39.118531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:13.773 [2024-11-15 11:56:39.148163] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:13.773 [2024-11-15 11:56:39.148193] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:13.773 [2024-11-15 11:56:39.148199] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:13.773 [2024-11-15 11:56:39.148203] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:13.773 [2024-11-15 11:56:39.148208] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:13.773 [2024-11-15 11:56:39.148680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:13.773 [2024-11-15 11:56:39.199694] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:13.773 [2024-11-15 11:56:39.199883] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
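The startup notices above also spell out how to inspect the new target: tracepoints were enabled with -e 0xFFFF and shm id 0, so a snapshot can be taken at runtime or the shm file copied for offline analysis. A short sketch following those hints; the spdk_trace binary location under build/bin is an assumption and varies across SPDK versions:

# Sketch based on the app_setup_trace notices above.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}

"$SPDK_DIR/build/bin/spdk_trace" -s nvmf -i 0   # live snapshot of events

cp /dev/shm/nvmf_trace.0 /tmp/                  # or keep it for offline debug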
00:33:14.343 11:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:14.343 11:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:33:14.343 11:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:14.343 11:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:14.343 11:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:14.604 11:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:14.604 11:56:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:14.604 [2024-11-15 11:56:40.051287] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:33:14.604 [2024-11-15 11:56:40.051538] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:33:14.604 [2024-11-15 11:56:40.051646] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:33:14.604 11:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:33:14.604 11:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 32f7402b-0577-4d5b-a9b3-1b6f381230b7 00:33:14.604 11:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=32f7402b-0577-4d5b-a9b3-1b6f381230b7 00:33:14.604 11:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:33:14.604 11:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:33:14.604 11:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:33:14.604 11:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:33:14.604 11:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:14.865 11:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 32f7402b-0577-4d5b-a9b3-1b6f381230b7 -t 2000 00:33:15.126 [ 00:33:15.126 { 00:33:15.126 "name": "32f7402b-0577-4d5b-a9b3-1b6f381230b7", 00:33:15.126 "aliases": [ 00:33:15.126 "lvs/lvol" 00:33:15.126 ], 00:33:15.126 "product_name": "Logical Volume", 00:33:15.126 "block_size": 4096, 00:33:15.126 "num_blocks": 38912, 00:33:15.126 "uuid": "32f7402b-0577-4d5b-a9b3-1b6f381230b7", 00:33:15.126 "assigned_rate_limits": { 00:33:15.126 "rw_ios_per_sec": 0, 00:33:15.126 "rw_mbytes_per_sec": 0, 00:33:15.126 
"r_mbytes_per_sec": 0, 00:33:15.126 "w_mbytes_per_sec": 0 00:33:15.126 }, 00:33:15.126 "claimed": false, 00:33:15.126 "zoned": false, 00:33:15.126 "supported_io_types": { 00:33:15.126 "read": true, 00:33:15.126 "write": true, 00:33:15.126 "unmap": true, 00:33:15.126 "flush": false, 00:33:15.126 "reset": true, 00:33:15.126 "nvme_admin": false, 00:33:15.126 "nvme_io": false, 00:33:15.126 "nvme_io_md": false, 00:33:15.126 "write_zeroes": true, 00:33:15.126 "zcopy": false, 00:33:15.126 "get_zone_info": false, 00:33:15.126 "zone_management": false, 00:33:15.126 "zone_append": false, 00:33:15.126 "compare": false, 00:33:15.126 "compare_and_write": false, 00:33:15.126 "abort": false, 00:33:15.126 "seek_hole": true, 00:33:15.126 "seek_data": true, 00:33:15.126 "copy": false, 00:33:15.126 "nvme_iov_md": false 00:33:15.126 }, 00:33:15.126 "driver_specific": { 00:33:15.126 "lvol": { 00:33:15.126 "lvol_store_uuid": "ea28b2e9-8ab2-4da5-81de-dd86e4953926", 00:33:15.126 "base_bdev": "aio_bdev", 00:33:15.126 "thin_provision": false, 00:33:15.126 "num_allocated_clusters": 38, 00:33:15.126 "snapshot": false, 00:33:15.126 "clone": false, 00:33:15.126 "esnap_clone": false 00:33:15.126 } 00:33:15.126 } 00:33:15.126 } 00:33:15.126 ] 00:33:15.126 11:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:33:15.126 11:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ea28b2e9-8ab2-4da5-81de-dd86e4953926 00:33:15.126 11:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:33:15.386 11:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:33:15.386 11:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ea28b2e9-8ab2-4da5-81de-dd86e4953926 00:33:15.386 11:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:33:15.386 11:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:33:15.387 11:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:15.646 [2024-11-15 11:56:40.949158] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:33:15.646 11:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ea28b2e9-8ab2-4da5-81de-dd86e4953926 00:33:15.646 11:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:33:15.646 11:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ea28b2e9-8ab2-4da5-81de-dd86e4953926 00:33:15.646 11:56:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:15.646 11:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:15.646 11:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:15.646 11:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:15.646 11:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:15.646 11:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:33:15.646 11:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:15.646 11:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:33:15.646 11:56:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ea28b2e9-8ab2-4da5-81de-dd86e4953926 00:33:15.907 request: 00:33:15.907 { 00:33:15.907 "uuid": "ea28b2e9-8ab2-4da5-81de-dd86e4953926", 00:33:15.907 "method": "bdev_lvol_get_lvstores", 00:33:15.907 "req_id": 1 00:33:15.907 } 00:33:15.907 Got JSON-RPC error response 00:33:15.907 response: 00:33:15.907 { 00:33:15.907 "code": -19, 00:33:15.907 "message": "No such device" 00:33:15.907 } 00:33:15.907 11:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:33:15.907 11:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:33:15.907 11:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:33:15.907 11:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:33:15.907 11:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:15.907 aio_bdev 00:33:15.907 11:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 32f7402b-0577-4d5b-a9b3-1b6f381230b7 00:33:15.907 11:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=32f7402b-0577-4d5b-a9b3-1b6f381230b7 00:33:15.907 11:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:33:15.907 11:56:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:33:15.907 11:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:33:15.907 11:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:33:15.907 11:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:16.168 11:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 32f7402b-0577-4d5b-a9b3-1b6f381230b7 -t 2000 00:33:16.428 [ 00:33:16.428 { 00:33:16.428 "name": "32f7402b-0577-4d5b-a9b3-1b6f381230b7", 00:33:16.428 "aliases": [ 00:33:16.428 "lvs/lvol" 00:33:16.428 ], 00:33:16.428 "product_name": "Logical Volume", 00:33:16.428 "block_size": 4096, 00:33:16.428 "num_blocks": 38912, 00:33:16.428 "uuid": "32f7402b-0577-4d5b-a9b3-1b6f381230b7", 00:33:16.428 "assigned_rate_limits": { 00:33:16.428 "rw_ios_per_sec": 0, 00:33:16.428 "rw_mbytes_per_sec": 0, 00:33:16.428 "r_mbytes_per_sec": 0, 00:33:16.428 "w_mbytes_per_sec": 0 00:33:16.428 }, 00:33:16.428 "claimed": false, 00:33:16.428 "zoned": false, 00:33:16.428 "supported_io_types": { 00:33:16.428 "read": true, 00:33:16.428 "write": true, 00:33:16.428 "unmap": true, 00:33:16.428 "flush": false, 00:33:16.428 "reset": true, 00:33:16.428 "nvme_admin": false, 00:33:16.428 "nvme_io": false, 00:33:16.428 "nvme_io_md": false, 00:33:16.428 "write_zeroes": true, 00:33:16.428 "zcopy": false, 00:33:16.428 "get_zone_info": false, 00:33:16.428 "zone_management": false, 00:33:16.428 "zone_append": false, 00:33:16.428 "compare": false, 00:33:16.428 "compare_and_write": false, 00:33:16.428 "abort": false, 00:33:16.428 "seek_hole": true, 00:33:16.428 "seek_data": true, 00:33:16.428 "copy": false, 00:33:16.428 "nvme_iov_md": false 00:33:16.428 }, 00:33:16.428 "driver_specific": { 00:33:16.428 "lvol": { 00:33:16.428 "lvol_store_uuid": "ea28b2e9-8ab2-4da5-81de-dd86e4953926", 00:33:16.428 "base_bdev": "aio_bdev", 00:33:16.428 "thin_provision": false, 00:33:16.428 "num_allocated_clusters": 38, 00:33:16.428 "snapshot": false, 00:33:16.428 "clone": false, 00:33:16.428 "esnap_clone": false 00:33:16.428 } 00:33:16.428 } 00:33:16.428 } 00:33:16.428 ] 00:33:16.428 11:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:33:16.428 11:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ea28b2e9-8ab2-4da5-81de-dd86e4953926 00:33:16.428 11:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:33:16.428 11:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:33:16.428 11:56:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ea28b2e9-8ab2-4da5-81de-dd86e4953926 00:33:16.428 11:56:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:33:16.689 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:33:16.689 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 32f7402b-0577-4d5b-a9b3-1b6f381230b7 00:33:16.950 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ea28b2e9-8ab2-4da5-81de-dd86e4953926 00:33:16.950 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:17.210 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:17.210 00:33:17.210 real 0m17.744s 00:33:17.210 user 0m35.683s 00:33:17.210 sys 0m3.105s 00:33:17.210 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:17.210 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:17.210 ************************************ 00:33:17.210 END TEST lvs_grow_dirty 00:33:17.210 ************************************ 00:33:17.210 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:33:17.210 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:33:17.210 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:33:17.210 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:33:17.210 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:33:17.210 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:33:17.210 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:33:17.210 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:33:17.210 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:33:17.210 nvmf_trace.0 00:33:17.210 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:33:17.210 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:33:17.210 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:17.210 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
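Condensing the tail of the test above: recreating aio_bdev over the same backing file triggers blobstore recovery, after which the lvstore must report free_clusters == 61 and total_data_clusters == 99 (the geometry grown before the SIGKILL survived), and everything is torn down in order. A rough sketch of those checks, assuming a running target and reusing the UUIDs and paths from the trace:

# Sketch of the recovery check and teardown performed above.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
RPC="$SPDK_DIR/scripts/rpc.py"
AIO_FILE="$SPDK_DIR/test/nvmf/target/aio_bdev"
LVS_UUID=ea28b2e9-8ab2-4da5-81de-dd86e4953926

"$RPC" bdev_aio_create "$AIO_FILE" aio_bdev 4096   # triggers blobstore recovery
free=$("$RPC" bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].free_clusters')
total=$("$RPC" bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].total_data_clusters')
(( free == 61 && total == 99 )) || echo "recovered lvstore geometry unexpected" >&2

# Teardown mirrors the log: lvol, then lvstore, then the aio bdev.
"$RPC" bdev_lvol_delete 32f7402b-0577-4d5b-a9b3-1b6f381230b7
"$RPC" bdev_lvol_delete_lvstore -u "$LVS_UUID"
"$RPC" bdev_aio_delete aio_bdev
rm -f "$AIO_FILE"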
00:33:17.210 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:17.210 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:33:17.210 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:17.210 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:17.210 rmmod nvme_tcp 00:33:17.471 rmmod nvme_fabrics 00:33:17.471 rmmod nvme_keyring 00:33:17.471 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:17.471 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:33:17.471 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:33:17.471 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1311548 ']' 00:33:17.471 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1311548 00:33:17.471 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 1311548 ']' 00:33:17.471 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 1311548 00:33:17.471 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:33:17.471 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:17.471 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1311548 00:33:17.471 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:17.471 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:17.471 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1311548' 00:33:17.471 killing process with pid 1311548 00:33:17.471 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 1311548 00:33:17.471 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 1311548 00:33:17.471 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:17.471 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:17.471 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:17.471 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:33:17.471 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:33:17.471 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:17.471 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:33:17.471 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:17.471 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:17.471 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:17.471 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:17.471 11:56:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:20.016 00:33:20.016 real 0m45.170s 00:33:20.016 user 0m54.274s 00:33:20.016 sys 0m10.782s 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:20.016 ************************************ 00:33:20.016 END TEST nvmf_lvs_grow 00:33:20.016 ************************************ 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:20.016 ************************************ 00:33:20.016 START TEST nvmf_bdev_io_wait 00:33:20.016 ************************************ 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:33:20.016 * Looking for test storage... 
00:33:20.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:20.016 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:20.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:20.016 --rc genhtml_branch_coverage=1 00:33:20.016 --rc genhtml_function_coverage=1 00:33:20.016 --rc genhtml_legend=1 00:33:20.017 --rc geninfo_all_blocks=1 00:33:20.017 --rc geninfo_unexecuted_blocks=1 00:33:20.017 00:33:20.017 ' 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:20.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:20.017 --rc genhtml_branch_coverage=1 00:33:20.017 --rc genhtml_function_coverage=1 00:33:20.017 --rc genhtml_legend=1 00:33:20.017 --rc geninfo_all_blocks=1 00:33:20.017 --rc geninfo_unexecuted_blocks=1 00:33:20.017 00:33:20.017 ' 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:20.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:20.017 --rc genhtml_branch_coverage=1 00:33:20.017 --rc genhtml_function_coverage=1 00:33:20.017 --rc genhtml_legend=1 00:33:20.017 --rc geninfo_all_blocks=1 00:33:20.017 --rc geninfo_unexecuted_blocks=1 00:33:20.017 00:33:20.017 ' 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:20.017 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:20.017 --rc genhtml_branch_coverage=1 00:33:20.017 --rc genhtml_function_coverage=1 00:33:20.017 --rc genhtml_legend=1 00:33:20.017 --rc geninfo_all_blocks=1 00:33:20.017 --rc 
geninfo_unexecuted_blocks=1 00:33:20.017 00:33:20.017 ' 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:20.017 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:33:20.018 11:56:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
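The arrays built above are the whitelist of supported NICs (E810 at 0x1592/0x159b, X722 at 0x37d2, plus the Mellanox IDs); the helper then walks the PCI bus and lists the net devices under each match. A rough standalone equivalent using sysfs directly; the real helper consults a pci_bus_cache, so this is an approximation, not the suite's code:

# Sketch: find E810 ports by matching vendor/device IDs in sysfs and
# listing the net devices registered under each matching function.
for pci in /sys/bus/pci/devices/*; do
  [[ $(cat "$pci/vendor") == 0x8086 ]] || continue
  case $(cat "$pci/device") in
    0x1592|0x159b) ;;          # E810 IDs from the e810=() array above
    *) continue ;;
  esac
  echo "Found ${pci##*/} ($(cat "$pci/vendor") - $(cat "$pci/device"))"
  ls "$pci/net" 2>/dev/null    # e.g. cvl_0_0 / cvl_0_1 after renaming
done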
00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:28.161 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:28.161 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:28.161 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:28.161 
11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:28.161 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:28.161 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:28.162 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:28.162 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:28.162 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:28.162 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:28.162 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:28.162 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:28.162 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:28.162 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:28.162 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:28.162 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:28.162 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:28.162 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:28.162 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:28.162 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:28.162 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:28.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:28.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.612 ms 00:33:28.162 00:33:28.162 --- 10.0.0.2 ping statistics --- 00:33:28.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:28.162 rtt min/avg/max/mdev = 0.612/0.612/0.612/0.000 ms 00:33:28.162 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:28.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:28.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.327 ms 00:33:28.162 00:33:28.162 --- 10.0.0.1 ping statistics --- 00:33:28.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:28.162 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:33:28.162 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:28.162 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:33:28.162 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:28.162 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:28.162 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:28.162 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:28.162 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:28.162 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:28.162 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:28.162 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:33:28.162 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:28.162 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:28.162 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:28.162 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1316489 00:33:28.162 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1316489 00:33:28.162 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:33:28.162 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 1316489 ']' 00:33:28.162 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:28.162 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:28.162 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:28.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
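
nvmf_tcp_init, traced above, splits the two E810 ports across network namespaces so target and initiator traffic crosses real wire: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target port (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator port (10.0.0.1), a firewall exception is punched for the NVMe/TCP port, and both directions are ping-verified. Condensed from the trace (every command appears verbatim above):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP (port 4420) in; the comment tags the rule so teardown can strip it
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                 # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator

The SPDK_NVMF comment is what lets the iptr cleanup near the end of this test (iptables-save | grep -v SPDK_NVMF | iptables-restore) remove exactly these rules and nothing else.
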
00:33:28.162 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:28.162 11:56:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:28.162 [2024-11-15 11:56:52.909173] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:28.162 [2024-11-15 11:56:52.910317] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:33:28.162 [2024-11-15 11:56:52.910367] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:28.162 [2024-11-15 11:56:53.010627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:28.162 [2024-11-15 11:56:53.065211] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:28.162 [2024-11-15 11:56:53.065262] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:28.162 [2024-11-15 11:56:53.065271] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:28.162 [2024-11-15 11:56:53.065279] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:28.162 [2024-11-15 11:56:53.065286] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:28.162 [2024-11-15 11:56:53.067907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:28.162 [2024-11-15 11:56:53.068150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:28.162 [2024-11-15 11:56:53.068311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:28.162 [2024-11-15 11:56:53.068312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:28.162 [2024-11-15 11:56:53.068829] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
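
nvmfappstart, expanded above, boots the target inside that namespace so its listener binds to cvl_0_0. Stripped of the wrapper, the launch is (path shortened; flags as traced):

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
  nvmfpid=$!

-m 0xF pins one reactor per core 0-3 (the four "Reactor started" notices above), -e 0xFFFF enables all tracepoint groups (hence the spdk_trace hints), and --wait-for-rpc parks initialization after app_thread starts, so bdev options can still be changed over /var/tmp/spdk.sock before framework_start_init completes the startup.
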
00:33:28.423 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:28.423 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:33:28.423 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:28.423 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:28.423 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:28.423 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:28.423 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:33:28.423 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.423 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:28.423 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.423 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:33:28.423 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.423 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:28.423 [2024-11-15 11:56:53.833932] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:28.423 [2024-11-15 11:56:53.834321] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:28.423 [2024-11-15 11:56:53.834423] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:28.423 [2024-11-15 11:56:53.834570] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
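
rpc_cmd here is the autotest wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock; the two calls traced above are equivalent to:

  scripts/rpc.py bdev_set_options -p 5 -c 1   # bdev_io pool of 5, per-thread cache of 1
  scripts/rpc.py framework_start_init         # finish the --wait-for-rpc'd startup

The deliberately tiny pool appears to be the point of this test: with only five bdev_ios available, 128-deep bdevperf queues must run the pool dry and exercise the spdk_bdev_queue_io_wait retry path that bdev_io_wait.sh is named for. (That reading of -p/-c is an inference from the test's purpose, not spelled out in the trace.)
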
00:33:28.423 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.423 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:28.423 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.423 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:28.423 [2024-11-15 11:56:53.845329] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:28.423 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.423 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:28.423 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.423 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:28.423 Malloc0 00:33:28.423 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.423 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:28.423 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.423 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:28.423 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.423 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:28.423 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.423 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:28.423 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.423 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:28.423 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:28.423 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:28.423 [2024-11-15 11:56:53.917479] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:28.684 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1316629 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1316631 00:33:28.685 11:56:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:28.685 { 00:33:28.685 "params": { 00:33:28.685 "name": "Nvme$subsystem", 00:33:28.685 "trtype": "$TEST_TRANSPORT", 00:33:28.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:28.685 "adrfam": "ipv4", 00:33:28.685 "trsvcid": "$NVMF_PORT", 00:33:28.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:28.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:28.685 "hdgst": ${hdgst:-false}, 00:33:28.685 "ddgst": ${ddgst:-false} 00:33:28.685 }, 00:33:28.685 "method": "bdev_nvme_attach_controller" 00:33:28.685 } 00:33:28.685 EOF 00:33:28.685 )") 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1316633 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:28.685 { 00:33:28.685 "params": { 00:33:28.685 "name": "Nvme$subsystem", 00:33:28.685 "trtype": "$TEST_TRANSPORT", 00:33:28.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:28.685 "adrfam": "ipv4", 00:33:28.685 "trsvcid": "$NVMF_PORT", 00:33:28.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:28.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:28.685 "hdgst": ${hdgst:-false}, 00:33:28.685 "ddgst": ${ddgst:-false} 00:33:28.685 }, 00:33:28.685 "method": "bdev_nvme_attach_controller" 00:33:28.685 } 00:33:28.685 EOF 00:33:28.685 )") 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1316636 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
gen_nvmf_target_json 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:28.685 { 00:33:28.685 "params": { 00:33:28.685 "name": "Nvme$subsystem", 00:33:28.685 "trtype": "$TEST_TRANSPORT", 00:33:28.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:28.685 "adrfam": "ipv4", 00:33:28.685 "trsvcid": "$NVMF_PORT", 00:33:28.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:28.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:28.685 "hdgst": ${hdgst:-false}, 00:33:28.685 "ddgst": ${ddgst:-false} 00:33:28.685 }, 00:33:28.685 "method": "bdev_nvme_attach_controller" 00:33:28.685 } 00:33:28.685 EOF 00:33:28.685 )") 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:28.685 { 00:33:28.685 "params": { 00:33:28.685 "name": "Nvme$subsystem", 00:33:28.685 "trtype": "$TEST_TRANSPORT", 00:33:28.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:28.685 "adrfam": "ipv4", 00:33:28.685 "trsvcid": "$NVMF_PORT", 00:33:28.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:28.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:28.685 "hdgst": ${hdgst:-false}, 00:33:28.685 "ddgst": ${ddgst:-false} 00:33:28.685 }, 00:33:28.685 "method": "bdev_nvme_attach_controller" 00:33:28.685 } 00:33:28.685 EOF 00:33:28.685 )") 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1316629 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
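
gen_nvmf_target_json, whose xtrace dominates this stretch, builds each bdevperf's --json input by expanding a heredoc per subsystem into a config array, comma-joining the fragments, and piping the result through jq. A stripped-down reconstruction of that plumbing, assuming the joining behaves as sketched; the real helper also wraps the fragments in an envelope not visible in this excerpt:

# one attach-controller fragment, with shell expansion inside the heredoc
config=()
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme1",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
IFS=,
printf '%s\n' "${config[*]}" | jq .   # "${config[*]}" joins on the first char of IFS; jq validates

With the values this run exports (TEST_TRANSPORT=tcp, NVMF_FIRST_TARGET_IP=10.0.0.2, NVMF_PORT=4420), the fragment resolves to exactly the attach-controller JSON printf'd four times just below, once per bdevperf instance.
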
00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:28.685 "params": { 00:33:28.685 "name": "Nvme1", 00:33:28.685 "trtype": "tcp", 00:33:28.685 "traddr": "10.0.0.2", 00:33:28.685 "adrfam": "ipv4", 00:33:28.685 "trsvcid": "4420", 00:33:28.685 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:28.685 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:28.685 "hdgst": false, 00:33:28.685 "ddgst": false 00:33:28.685 }, 00:33:28.685 "method": "bdev_nvme_attach_controller" 00:33:28.685 }' 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:28.685 "params": { 00:33:28.685 "name": "Nvme1", 00:33:28.685 "trtype": "tcp", 00:33:28.685 "traddr": "10.0.0.2", 00:33:28.685 "adrfam": "ipv4", 00:33:28.685 "trsvcid": "4420", 00:33:28.685 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:28.685 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:28.685 "hdgst": false, 00:33:28.685 "ddgst": false 00:33:28.685 }, 00:33:28.685 "method": "bdev_nvme_attach_controller" 00:33:28.685 }' 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:28.685 "params": { 00:33:28.685 "name": "Nvme1", 00:33:28.685 "trtype": "tcp", 00:33:28.685 "traddr": "10.0.0.2", 00:33:28.685 "adrfam": "ipv4", 00:33:28.685 "trsvcid": "4420", 00:33:28.685 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:28.685 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:28.685 "hdgst": false, 00:33:28.685 "ddgst": false 00:33:28.685 }, 00:33:28.685 "method": "bdev_nvme_attach_controller" 00:33:28.685 }' 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:33:28.685 11:56:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:28.685 "params": { 00:33:28.685 "name": "Nvme1", 00:33:28.685 "trtype": "tcp", 00:33:28.685 "traddr": "10.0.0.2", 00:33:28.685 "adrfam": "ipv4", 00:33:28.685 "trsvcid": "4420", 00:33:28.685 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:28.685 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:28.685 "hdgst": false, 00:33:28.685 "ddgst": false 00:33:28.685 }, 00:33:28.685 "method": "bdev_nvme_attach_controller" 00:33:28.685 }' 00:33:28.685 [2024-11-15 11:56:53.976361] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:33:28.685 [2024-11-15 11:56:53.976442] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:33:28.686 [2024-11-15 11:56:53.976909] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:33:28.686 [2024-11-15 11:56:53.976974] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:33:28.686 [2024-11-15 11:56:53.978772] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:33:28.686 [2024-11-15 11:56:53.978835] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:33:28.686 [2024-11-15 11:56:53.981279] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:33:28.686 [2024-11-15 11:56:53.981349] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:33:28.946 [2024-11-15 11:56:54.199813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:28.946 [2024-11-15 11:56:54.240293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:28.946 [2024-11-15 11:56:54.289885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:28.946 [2024-11-15 11:56:54.331758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:28.946 [2024-11-15 11:56:54.382203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:28.946 [2024-11-15 11:56:54.425261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:33:29.207 [2024-11-15 11:56:54.451836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:29.207 [2024-11-15 11:56:54.489875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:29.207 Running I/O for 1 seconds... 00:33:29.207 Running I/O for 1 seconds... 00:33:29.207 Running I/O for 1 seconds... 00:33:29.467 Running I/O for 1 seconds... 
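
Four bdevperf instances are now running concurrently against the same subsystem, one workload each on cores 4-7, every one fed its JSON over an anonymous pipe on fd 63. The writer's launch line from the trace, shortened:

  ./build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256

i.e. core mask 0x10 (core 4), shared-memory instance id 1, queue depth 128, 4096-byte IOs, a 1-second run with 256 MiB of memory; the read (-m 0x20 -i 2), flush (-m 0x40 -i 3) and unmap (-m 0x80 -i 4) instances traced above differ only in mask, id and -w. The per-workload Latency tables that follow are their one-second results.
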
00:33:30.408 12604.00 IOPS, 49.23 MiB/s 00:33:30.408 Latency(us) 00:33:30.408 [2024-11-15T10:56:55.906Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:30.408 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:33:30.408 Nvme1n1 : 1.01 12642.67 49.39 0.00 0.00 10088.19 5188.27 12397.23 00:33:30.408 [2024-11-15T10:56:55.906Z] =================================================================================================================== 00:33:30.408 [2024-11-15T10:56:55.906Z] Total : 12642.67 49.39 0.00 0.00 10088.19 5188.27 12397.23 00:33:30.408 6633.00 IOPS, 25.91 MiB/s 00:33:30.408 Latency(us) 00:33:30.408 [2024-11-15T10:56:55.906Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:30.408 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:33:30.408 Nvme1n1 : 1.02 6691.17 26.14 0.00 0.00 18965.77 3194.88 33204.91 00:33:30.408 [2024-11-15T10:56:55.906Z] =================================================================================================================== 00:33:30.408 [2024-11-15T10:56:55.906Z] Total : 6691.17 26.14 0.00 0.00 18965.77 3194.88 33204.91 00:33:30.408 185576.00 IOPS, 724.91 MiB/s 00:33:30.408 Latency(us) 00:33:30.408 [2024-11-15T10:56:55.906Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:30.408 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:33:30.408 Nvme1n1 : 1.00 185208.03 723.47 0.00 0.00 687.21 310.61 1979.73 00:33:30.408 [2024-11-15T10:56:55.906Z] =================================================================================================================== 00:33:30.408 [2024-11-15T10:56:55.906Z] Total : 185208.03 723.47 0.00 0.00 687.21 310.61 1979.73 00:33:30.408 6782.00 IOPS, 26.49 MiB/s 00:33:30.408 Latency(us) 00:33:30.408 [2024-11-15T10:56:55.906Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:30.408 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:33:30.408 Nvme1n1 : 1.01 6903.84 26.97 0.00 0.00 18482.40 4532.91 37355.52 00:33:30.408 [2024-11-15T10:56:55.906Z] =================================================================================================================== 00:33:30.408 [2024-11-15T10:56:55.906Z] Total : 6903.84 26.97 0.00 0.00 18482.40 4532.91 37355.52 00:33:30.408 11:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1316631 00:33:30.408 11:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1316633 00:33:30.408 11:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1316636 00:33:30.408 11:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:30.408 11:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:30.408 11:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:30.408 11:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:30.408 11:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:33:30.408 11:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:33:30.408 11:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:30.408 11:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:33:30.408 11:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:30.408 11:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:33:30.408 11:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:30.408 11:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:30.408 rmmod nvme_tcp 00:33:30.408 rmmod nvme_fabrics 00:33:30.408 rmmod nvme_keyring 00:33:30.669 11:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:30.669 11:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:33:30.669 11:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:33:30.669 11:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1316489 ']' 00:33:30.669 11:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1316489 00:33:30.669 11:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 1316489 ']' 00:33:30.669 11:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 1316489 00:33:30.669 11:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:33:30.669 11:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:30.669 11:56:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1316489 00:33:30.669 11:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:30.669 11:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:30.669 11:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1316489' 00:33:30.669 killing process with pid 1316489 00:33:30.669 11:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 1316489 00:33:30.669 11:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 1316489 00:33:30.669 11:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:30.669 11:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:30.669 11:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:30.669 11:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:33:30.669 11:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
00:33:30.669 11:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:30.669 11:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:33:30.669 11:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:30.669 11:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:30.669 11:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:30.669 11:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:30.669 11:56:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:33.214 00:33:33.214 real 0m13.114s 00:33:33.214 user 0m15.932s 00:33:33.214 sys 0m7.740s 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:33.214 ************************************ 00:33:33.214 END TEST nvmf_bdev_io_wait 00:33:33.214 ************************************ 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:33.214 ************************************ 00:33:33.214 START TEST nvmf_queue_depth 00:33:33.214 ************************************ 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:33.214 * Looking for test storage... 
00:33:33.214 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:33.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:33.214 --rc genhtml_branch_coverage=1 00:33:33.214 --rc genhtml_function_coverage=1 00:33:33.214 --rc genhtml_legend=1 00:33:33.214 --rc geninfo_all_blocks=1 00:33:33.214 --rc geninfo_unexecuted_blocks=1 00:33:33.214 00:33:33.214 ' 00:33:33.214 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:33.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:33.214 --rc genhtml_branch_coverage=1 00:33:33.214 --rc genhtml_function_coverage=1 00:33:33.214 --rc genhtml_legend=1 00:33:33.214 --rc geninfo_all_blocks=1 00:33:33.215 --rc geninfo_unexecuted_blocks=1 00:33:33.215 00:33:33.215 ' 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:33.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:33.215 --rc genhtml_branch_coverage=1 00:33:33.215 --rc genhtml_function_coverage=1 00:33:33.215 --rc genhtml_legend=1 00:33:33.215 --rc geninfo_all_blocks=1 00:33:33.215 --rc geninfo_unexecuted_blocks=1 00:33:33.215 00:33:33.215 ' 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:33.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:33.215 --rc genhtml_branch_coverage=1 00:33:33.215 --rc genhtml_function_coverage=1 00:33:33.215 --rc genhtml_legend=1 00:33:33.215 --rc geninfo_all_blocks=1 00:33:33.215 --rc 
geninfo_unexecuted_blocks=1 00:33:33.215 00:33:33.215 ' 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:33:33.215 11:56:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:41.376 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:41.376 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:33:41.376 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:41.376 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:41.376 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:41.376 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:33:41.376 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:41.377 11:57:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:41.377 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:41.377 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:33:41.377 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:41.377 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:41.377 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:41.378 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:41.378 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:41.378 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:41.378 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:41.378 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:41.378 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:41.378 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:41.378 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:41.378 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:41.378 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:41.378 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:41.378 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
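The "Found net devices under …" lines come from globbing each device's net/ directory in sysfs, exactly as the pci_net_devs assignment in the trace shows. A small stand-alone version of that lookup, using the BDF the log reports (any other address works the same way):

  pci=0000:4b:00.0
  # Each PCI network function exposes its kernel netdev name(s) under
  # /sys/bus/pci/devices/<BDF>/net/; stripping the path yields e.g. cvl_0_0.
  for path in "/sys/bus/pci/devices/$pci/net/"*; do
    [ -e "$path" ] && echo "${path##*/}"
  done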
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:41.378 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:41.378 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:41.378 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:41.378 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:41.378 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:41.378 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:41.378 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:41.378 11:57:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:41.378 11:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:41.378 11:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:41.378 11:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:41.378 11:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:41.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:41.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.549 ms 00:33:41.378 00:33:41.378 --- 10.0.0.2 ping statistics --- 00:33:41.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:41.378 rtt min/avg/max/mdev = 0.549/0.549/0.549/0.000 ms 00:33:41.378 11:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:41.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:41.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:33:41.378 00:33:41.378 --- 10.0.0.1 ping statistics --- 00:33:41.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:41.378 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:33:41.378 11:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:41.378 11:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:33:41.378 11:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:41.378 11:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:41.378 11:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:41.378 11:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:41.378 11:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:41.378 11:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:41.378 11:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:41.378 11:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:33:41.378 11:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:41.378 11:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:41.378 11:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:41.378 11:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1321426 00:33:41.378 11:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1321426 00:33:41.378 11:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:41.378 11:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 1321426 ']' 00:33:41.378 11:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:41.378 11:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:41.378 11:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:41.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
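By this point nvmftestinit has built the two-sided topology the pings just verified: port 0 moved into a fresh network namespace as the target side, port 1 left in the root namespace as the initiator, one /24 spanning both, and the NVMe/TCP port punched through the firewall. A condensed replay of that plumbing, with every name and address taken from the trace above:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                 # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move port 0 into it
  ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open TCP/4420, tagged with a comment so cleanup can find the rule later:
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                           # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1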
00:33:41.378 11:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:41.378 11:57:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:41.378 [2024-11-15 11:57:06.187072] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:41.378 [2024-11-15 11:57:06.188199] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:33:41.378 [2024-11-15 11:57:06.188249] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:41.378 [2024-11-15 11:57:06.294626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:41.378 [2024-11-15 11:57:06.345310] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:41.378 [2024-11-15 11:57:06.345362] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:41.378 [2024-11-15 11:57:06.345371] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:41.378 [2024-11-15 11:57:06.345377] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:41.379 [2024-11-15 11:57:06.345384] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:41.379 [2024-11-15 11:57:06.346152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:41.379 [2024-11-15 11:57:06.429804] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:41.379 [2024-11-15 11:57:06.430102] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
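The target is now up: a single reactor on core 1 (-m 0x2), interrupt mode enabled, and both app_thread and the poll-group thread switched to intr mode. The body of waitforlisten is elided from the trace; a plausible minimal equivalent polls the RPC socket until the daemon answers (scripts/rpc.py and the rpc_get_methods call are real SPDK pieces, the retry loop itself is an assumption about the helper):

  rpc=/var/tmp/spdk.sock
  for i in {1..100}; do
    # rpc.py exits non-zero until nvmf_tgt is accepting RPCs on the socket.
    if ./scripts/rpc.py -s "$rpc" rpc_get_methods >/dev/null 2>&1; then
      break
    fi
    sleep 0.1
  done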
00:33:41.639 11:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:41.639 11:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:33:41.639 11:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:41.639 11:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:41.639 11:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:41.640 11:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:41.640 11:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:41.640 11:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.640 11:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:41.640 [2024-11-15 11:57:07.075017] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:41.640 11:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.640 11:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:41.640 11:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.640 11:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:41.640 Malloc0 00:33:41.640 11:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.640 11:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:41.640 11:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.640 11:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:41.640 11:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.640 11:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:41.640 11:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:41.901 11:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:41.901 11:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.901 11:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:41.901 11:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
00:33:41.901 11:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:41.901 [2024-11-15 11:57:07.155209] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:41.901 11:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:41.901 11:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1321521 00:33:41.901 11:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:41.901 11:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:33:41.901 11:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1321521 /var/tmp/bdevperf.sock 00:33:41.901 11:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 1321521 ']' 00:33:41.901 11:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:41.901 11:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:41.901 11:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:41.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:41.901 11:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:41.901 11:57:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:41.901 [2024-11-15 11:57:07.223715] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
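queue_depth.sh has now provisioned the whole target over RPC and launched bdevperf against it. The sequence, reconstructed from the rpc_cmd calls in the trace (scripts/rpc.py stands in for the rpc_cmd wrapper; verbs, arguments, and the bdevperf options are exactly the ones the log shows, with paths relative to the SPDK tree):

  rpc() { ./scripts/rpc.py "$@"; }
  rpc nvmf_create_transport -t tcp -o -u 8192
  rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512 B blocks
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # Separate bdevperf process: queue depth 1024, 4 KiB verify I/O for 10 s,
  # started idle (-z) with its own RPC socket.
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &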
00:33:41.901 [2024-11-15 11:57:07.223795] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1321521 ] 00:33:41.901 [2024-11-15 11:57:07.317578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:41.901 [2024-11-15 11:57:07.370702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:42.845 11:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:42.845 11:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:33:42.845 11:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:42.845 11:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:42.845 11:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:42.845 NVMe0n1 00:33:42.845 11:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:42.845 11:57:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:42.845 Running I/O for 10 seconds... 00:33:45.176 8192.00 IOPS, 32.00 MiB/s [2024-11-15T10:57:11.618Z] 8617.00 IOPS, 33.66 MiB/s [2024-11-15T10:57:12.558Z] 9209.33 IOPS, 35.97 MiB/s [2024-11-15T10:57:13.499Z] 10202.75 IOPS, 39.85 MiB/s [2024-11-15T10:57:14.439Z] 10821.60 IOPS, 42.27 MiB/s [2024-11-15T10:57:15.395Z] 11266.83 IOPS, 44.01 MiB/s [2024-11-15T10:57:16.359Z] 11564.14 IOPS, 45.17 MiB/s [2024-11-15T10:57:17.390Z] 11793.38 IOPS, 46.07 MiB/s [2024-11-15T10:57:18.774Z] 11994.89 IOPS, 46.86 MiB/s [2024-11-15T10:57:18.774Z] 12154.30 IOPS, 47.48 MiB/s 00:33:53.276 Latency(us) 00:33:53.276 [2024-11-15T10:57:18.774Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:53.276 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:33:53.276 Verification LBA range: start 0x0 length 0x4000 00:33:53.276 NVMe0n1 : 10.05 12178.96 47.57 0.00 0.00 83759.57 17694.72 78643.20 00:33:53.276 [2024-11-15T10:57:18.774Z] =================================================================================================================== 00:33:53.276 [2024-11-15T10:57:18.774Z] Total : 12178.96 47.57 0.00 0.00 83759.57 17694.72 78643.20 00:33:53.276 { 00:33:53.276 "results": [ 00:33:53.276 { 00:33:53.276 "job": "NVMe0n1", 00:33:53.276 "core_mask": "0x1", 00:33:53.276 "workload": "verify", 00:33:53.276 "status": "finished", 00:33:53.276 "verify_range": { 00:33:53.276 "start": 0, 00:33:53.276 "length": 16384 00:33:53.276 }, 00:33:53.276 "queue_depth": 1024, 00:33:53.276 "io_size": 4096, 00:33:53.276 "runtime": 10.053981, 00:33:53.276 "iops": 12178.956773441287, 00:33:53.276 "mibps": 47.574049896255026, 00:33:53.276 "io_failed": 0, 00:33:53.276 "io_timeout": 0, 00:33:53.276 "avg_latency_us": 83759.57154752668, 00:33:53.276 "min_latency_us": 17694.72, 00:33:53.276 "max_latency_us": 78643.2 00:33:53.276 } 00:33:53.276 ], 
00:33:53.276 "core_count": 1 00:33:53.276 } 00:33:53.276 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1321521 00:33:53.276 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 1321521 ']' 00:33:53.276 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 1321521 00:33:53.276 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:33:53.276 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:53.277 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1321521 00:33:53.277 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:53.277 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:53.277 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1321521' 00:33:53.277 killing process with pid 1321521 00:33:53.277 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 1321521 00:33:53.277 Received shutdown signal, test time was about 10.000000 seconds 00:33:53.277 00:33:53.277 Latency(us) 00:33:53.277 [2024-11-15T10:57:18.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:53.277 [2024-11-15T10:57:18.775Z] =================================================================================================================== 00:33:53.277 [2024-11-15T10:57:18.775Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:53.277 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 1321521 00:33:53.277 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:33:53.277 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:33:53.277 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:53.277 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:33:53.277 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:53.277 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:33:53.277 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:53.277 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:53.277 rmmod nvme_tcp 00:33:53.277 rmmod nvme_fabrics 00:33:53.277 rmmod nvme_keyring 00:33:53.277 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:53.277 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:33:53.277 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:33:53.277 11:57:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1321426 ']' 00:33:53.277 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1321426 00:33:53.277 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 1321426 ']' 00:33:53.277 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 1321426 00:33:53.277 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:33:53.277 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:53.277 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1321426 00:33:53.277 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:33:53.277 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:33:53.277 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1321426' 00:33:53.277 killing process with pid 1321426 00:33:53.277 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 1321426 00:33:53.277 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 1321426 00:33:53.538 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:53.538 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:53.538 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:53.538 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:33:53.538 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:33:53.538 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:53.538 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:33:53.538 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:53.538 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:53.538 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:53.538 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:53.538 11:57:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:55.450 11:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:55.450 00:33:55.450 real 0m22.604s 00:33:55.450 user 0m24.656s 00:33:55.450 sys 0m7.634s 00:33:55.450 11:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
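Teardown runs in the reverse order of setup: bdevperf first, then the target, then the firewall rule and the namespace. The comment tag added at setup is what lets the iptr helper strip exactly the rules this test installed. A sketch of that cleanup; the iptables pipeline is verbatim from the trace, while the ip netns del line is an assumption about what the elided _remove_spdk_ns amounts to:

  # Rebuild the ruleset without any rule carrying the SPDK_NVMF comment tag:
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip -4 addr flush cvl_0_1
  ip netns del cvl_0_0_ns_spdk   # assumption: the core of _remove_spdk_ns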
common/autotest_common.sh@1128 -- # xtrace_disable 00:33:55.450 11:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:55.450 ************************************ 00:33:55.450 END TEST nvmf_queue_depth 00:33:55.450 ************************************ 00:33:55.712 11:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:55.712 11:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:55.712 11:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:55.712 11:57:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:55.712 ************************************ 00:33:55.712 START TEST nvmf_target_multipath 00:33:55.712 ************************************ 00:33:55.712 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:55.712 * Looking for test storage... 00:33:55.712 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:55.712 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:55.712 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:33:55.712 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:55.712 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:55.712 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:55.712 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:55.712 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:55.712 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:33:55.712 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:33:55.712 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:33:55.712 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:33:55.712 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:33:55.712 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:33:55.712 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:33:55.712 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:55.712 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:33:55.712 11:57:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:33:55.712 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:55.712 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:55.712 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:33:55.712 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:33:55.712 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:55.712 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:33:55.712 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:33:55.712 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:33:55.712 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:33:55.712 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:55.712 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:33:55.712 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:33:55.712 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:55.712 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:55.712 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:33:55.712 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:55.712 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:55.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:55.712 --rc genhtml_branch_coverage=1 00:33:55.712 --rc genhtml_function_coverage=1 00:33:55.712 --rc genhtml_legend=1 00:33:55.712 --rc geninfo_all_blocks=1 00:33:55.712 --rc geninfo_unexecuted_blocks=1 00:33:55.712 00:33:55.712 ' 00:33:55.712 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:55.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:55.712 --rc genhtml_branch_coverage=1 00:33:55.712 --rc genhtml_function_coverage=1 00:33:55.712 --rc genhtml_legend=1 00:33:55.712 --rc geninfo_all_blocks=1 00:33:55.712 --rc geninfo_unexecuted_blocks=1 00:33:55.712 00:33:55.712 ' 00:33:55.712 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:55.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:55.712 --rc genhtml_branch_coverage=1 00:33:55.712 --rc genhtml_function_coverage=1 00:33:55.712 --rc genhtml_legend=1 00:33:55.712 --rc geninfo_all_blocks=1 00:33:55.712 --rc 
geninfo_unexecuted_blocks=1 00:33:55.712 00:33:55.712 ' 00:33:55.712 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:55.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:55.712 --rc genhtml_branch_coverage=1 00:33:55.712 --rc genhtml_function_coverage=1 00:33:55.712 --rc genhtml_legend=1 00:33:55.712 --rc geninfo_all_blocks=1 00:33:55.712 --rc geninfo_unexecuted_blocks=1 00:33:55.712 00:33:55.712 ' 00:33:55.712 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:55.712 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
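The cmp_versions walk a screen back is worth unpacking: it splits both version strings into numeric fields and compares them left to right, here deciding 1.15 < 2, so the lcov on this box predates 2.x and the older LCOV_OPTS set is exported. A compact stand-alone equivalent of that comparison (not the script's exact code, which also handles '-' and ':' separators):

  lt() {
    local IFS=. a b i
    read -ra a <<< "$1"; read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first is older
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1   # equal
  }
  lt 1.15 2 && echo "lcov predates 2.x"   # matches the outcome in the trace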
00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:55.974 11:57:21 
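The PATH exported above is the same handful of toolchain directories prepended once per source of paths/export.sh, which is why each entry repeats many times. Harmless here, but the usual guard against that growth is an idempotent prepend; a sketch (path_prepend is a hypothetical helper, not part of the SPDK scripts):

  path_prepend() {
    case ":$PATH:" in
      *":$1:"*) ;;               # already on PATH, do nothing
      *) PATH="$1:$PATH" ;;
    esac
  }
  path_prepend /opt/go/1.21.1/bin   # no-op on the second and later calls
  export PATH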
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:33:55.974 11:57:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
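Before the second device scan starts, note the initiator identity established when common.sh was sourced above: the host NQN comes from nvme-cli, and the host ID is its trailing UUID, matching the NVME_HOSTNQN/NVME_HOSTID pair in the trace. A sketch of that derivation (however common.sh extracts the ID internally, the suffix strip below reproduces the logged value):

  # gen-hostnqn prints an nqn.2014-08.org.nvmexpress:uuid:<UUID> string.
  NVME_HOSTNQN=$(nvme gen-hostnqn)
  NVME_HOSTID=${NVME_HOSTNQN##*:}   # keep only the UUID after the last colon
  echo "$NVME_HOSTNQN $NVME_HOSTID"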
00:34:04.119 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:04.119 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:34:04.119 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:04.119 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:04.119 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:04.119 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:04.119 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:04.119 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:34:04.119 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:04.119 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:34:04.119 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:34:04.119 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:34:04.119 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:34:04.119 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:34:04.119 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:34:04.119 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:04.119 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:04.119 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:04.119 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:04.119 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:04.119 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:04.119 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:04.119 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:04.119 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:04.119 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:04.119 11:57:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:04.119 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:04.119 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:04.119 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:04.119 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:04.119 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:04.119 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:04.119 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:04.119 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:04.119 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:04.119 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:04.120 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:04.120 11:57:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:04.120 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:04.120 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
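The trace above is nvmf_tcp_init laying out the two-endpoint test topology: the first E810 port (cvl_0_0) is moved into a private network namespace to act as the target, the second port (cvl_0_1) stays in the root namespace as the initiator, and an iptables rule tagged with an SPDK_NVMF comment opens the NVMe/TCP port so teardown can later strip exactly the rules the test added. Condensed into plain shell (same interfaces and addresses as this run), the setup is roughly:

    ip netns add cvl_0_0_ns_spdk                      # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port; the comment lets teardown remove only SPDK's rules
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

The cross-namespace pings that follow confirm the two ports can actually reach each other before any NVMe traffic is attempted.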
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:34:04.120 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:04.120 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.733 ms
00:34:04.120
00:34:04.120 --- 10.0.0.2 ping statistics ---
00:34:04.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:04.120 rtt min/avg/max/mdev = 0.733/0.733/0.733/0.000 ms
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:34:04.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:04.120 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms
00:34:04.120
00:34:04.120 --- 10.0.0.1 ping statistics ---
00:34:04.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:04.120 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']'
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test'
00:34:04.120 only one NIC for nvmf test
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:34:04.120 rmmod nvme_tcp
00:34:04.120 rmmod nvme_fabrics
00:34:04.120 rmmod nvme_keyring
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
00:34:04.120 11:57:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:04.120 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:04.121 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:34:04.121 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:34:04.121 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:04.121 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:34:04.121 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:04.121 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:04.121 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:04.121 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:04.121 11:57:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:05.508 11:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:05.508 11:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:34:05.508 11:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:34:05.508 11:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:05.508 11:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:34:05.508 11:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:05.508 11:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:34:05.508 11:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:05.508 11:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:05.508 11:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:05.508 11:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:34:05.508 11:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:34:05.508 11:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:34:05.508 11:57:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:05.508 11:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:05.508 11:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:05.508 11:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:34:05.508 11:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:34:05.508 11:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:05.508 11:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:34:05.508 11:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:05.508 11:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:05.508 11:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:05.508 11:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:05.508 11:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:05.508 11:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:05.508 00:34:05.508 real 0m9.953s 00:34:05.508 user 0m2.283s 00:34:05.508 sys 0m5.621s 00:34:05.508 11:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:05.508 11:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:34:05.508 ************************************ 00:34:05.508 END TEST nvmf_target_multipath 00:34:05.508 ************************************ 00:34:05.509 11:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:34:05.509 11:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:05.509 11:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:05.509 11:57:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:05.770 ************************************ 00:34:05.770 START TEST nvmf_zcopy 00:34:05.770 ************************************ 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:34:05.770 * Looking for test storage... 
00:34:05.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:05.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.770 --rc genhtml_branch_coverage=1 00:34:05.770 --rc genhtml_function_coverage=1 00:34:05.770 --rc genhtml_legend=1 00:34:05.770 --rc geninfo_all_blocks=1 00:34:05.770 --rc geninfo_unexecuted_blocks=1 00:34:05.770 00:34:05.770 ' 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:05.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.770 --rc genhtml_branch_coverage=1 00:34:05.770 --rc genhtml_function_coverage=1 00:34:05.770 --rc genhtml_legend=1 00:34:05.770 --rc geninfo_all_blocks=1 00:34:05.770 --rc geninfo_unexecuted_blocks=1 00:34:05.770 00:34:05.770 ' 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:05.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.770 --rc genhtml_branch_coverage=1 00:34:05.770 --rc genhtml_function_coverage=1 00:34:05.770 --rc genhtml_legend=1 00:34:05.770 --rc geninfo_all_blocks=1 00:34:05.770 --rc geninfo_unexecuted_blocks=1 00:34:05.770 00:34:05.770 ' 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:05.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.770 --rc genhtml_branch_coverage=1 00:34:05.770 --rc genhtml_function_coverage=1 00:34:05.770 --rc genhtml_legend=1 00:34:05.770 --rc geninfo_all_blocks=1 00:34:05.770 --rc geninfo_unexecuted_blocks=1 00:34:05.770 00:34:05.770 ' 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:05.770 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:05.771 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:05.771 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:34:06.031 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:06.031 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:06.032 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:06.032 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.032 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.032 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.032 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:34:06.032 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.032 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:34:06.032 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:06.032 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:06.032 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:06.032 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:06.032 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:06.032 11:57:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:06.032 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:06.032 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:06.032 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:06.032 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:06.032 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:34:06.032 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:06.032 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:06.032 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:06.032 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:06.032 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:06.032 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:06.032 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:06.032 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:06.032 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:06.032 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:06.032 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:34:06.032 11:57:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:34:14.175 11:57:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:14.175 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:14.175 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:14.175 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:14.175 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:14.176 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:14.176 11:57:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:14.176 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:14.176 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.548 ms 00:34:14.176 00:34:14.176 --- 10.0.0.2 ping statistics --- 00:34:14.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:14.176 rtt min/avg/max/mdev = 0.548/0.548/0.548/0.000 ms 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:14.176 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:14.176 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:34:14.176 00:34:14.176 --- 10.0.0.1 ping statistics --- 00:34:14.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:14.176 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=1332571 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1332571 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 1332571 ']' 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:14.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:14.176 11:57:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:14.176 [2024-11-15 11:57:38.804604] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:14.176 [2024-11-15 11:57:38.805726] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:34:14.176 [2024-11-15 11:57:38.805773] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:14.176 [2024-11-15 11:57:38.904078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:14.176 [2024-11-15 11:57:38.953861] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:14.176 [2024-11-15 11:57:38.953909] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:14.176 [2024-11-15 11:57:38.953919] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:14.176 [2024-11-15 11:57:38.953926] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:14.176 [2024-11-15 11:57:38.953932] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:14.176 [2024-11-15 11:57:38.954704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:14.176 [2024-11-15 11:57:39.032151] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:14.176 [2024-11-15 11:57:39.032435] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
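This is nvmfappstart: nvmf_tgt is launched inside the target namespace, pinned to core 1 (-m 0x2) and in interrupt mode, and waitforlisten then polls its RPC socket until the application is ready to be configured. Done by hand, the sequence is approximately the following sketch (assuming the default /var/tmp/spdk.sock RPC socket):

    # start the target in the namespace; -i 0 fixes the shared-memory ID,
    # -e 0xFFFF enables all tracepoint groups, -m 0x2 pins it to core 1
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    nvmfpid=$!
    # block until the RPC socket answers, as waitforlisten does
    until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done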
00:34:14.176 11:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:34:14.176 11:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0
00:34:14.176 11:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:34:14.176 11:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable
00:34:14.176 11:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:14.176 11:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:14.176 11:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:34:14.176 11:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:34:14.176 11:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:14.176 11:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:14.176 [2024-11-15 11:57:39.667533] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:14.438 11:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:14.438 11:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:34:14.439 11:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:14.439 11:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:14.439 11:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:14.439 11:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:14.439 11:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:14.439 11:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:14.439 [2024-11-15 11:57:39.695850] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:14.439 11:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:14.439 11:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:34:14.439 11:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:14.439 11:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:14.439 11:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
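Everything so far has been provisioned over JSON-RPC: a zero-copy-enabled TCP transport, subsystem nqn.2016-06.io.spdk:cnode1, and its data and discovery listeners. rpc_cmd is the harness wrapper around scripts/rpc.py, so issued by hand against the same socket the equivalent calls are roughly (a sketch; --zcopy enables the zero-copy request path this test exists to exercise):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Next the script creates a 32 MiB malloc bdev with 4096-byte blocks and exposes it as namespace 1 of the subsystem.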
00:34:14.439 11:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:34:14.439 11:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:14.439 11:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:14.439 malloc0
00:34:14.439 11:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:14.439 11:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:34:14.439 11:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:14.439 11:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:14.439 11:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:14.439 11:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:34:14.439 11:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:34:14.439 11:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:34:14.439 11:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:34:14.439 11:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:34:14.439 11:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:34:14.439 {
00:34:14.439 "params": {
00:34:14.439 "name": "Nvme$subsystem",
00:34:14.439 "trtype": "$TEST_TRANSPORT",
00:34:14.439 "traddr": "$NVMF_FIRST_TARGET_IP",
00:34:14.439 "adrfam": "ipv4",
00:34:14.439 "trsvcid": "$NVMF_PORT",
00:34:14.439 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:34:14.439 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:34:14.439 "hdgst": ${hdgst:-false},
00:34:14.439 "ddgst": ${ddgst:-false}
00:34:14.439 },
00:34:14.439 "method": "bdev_nvme_attach_controller"
00:34:14.439 }
00:34:14.439 EOF
00:34:14.439 )")
00:34:14.439 11:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:34:14.439 11:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:34:14.439 11:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:34:14.439 11:57:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:34:14.439 "params": {
00:34:14.439 "name": "Nvme1",
00:34:14.439 "trtype": "tcp",
00:34:14.439 "traddr": "10.0.0.2",
00:34:14.439 "adrfam": "ipv4",
00:34:14.439 "trsvcid": "4420",
00:34:14.439 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:34:14.439 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:34:14.439 "hdgst": false,
00:34:14.439 "ddgst": false
00:34:14.439 },
00:34:14.439 "method": "bdev_nvme_attach_controller"
00:34:14.439 }'
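gen_nvmf_target_json emits one bdev_nvme_attach_controller entry per subsystem (the jq-formatted result is printed just above) and bdevperf reads it through /dev/fd rather than from a file on disk. Reproduced by hand, the invocation pattern is roughly this sketch:

    # feed the generated config to bdevperf via process substitution (appears as /dev/fd/NN)
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192

Here -t 10 runs the job for ten seconds, -q 128 keeps 128 commands outstanding, -w verify reads back and checks every write, and -o 8192 issues 8 KiB I/Os, matching the job line in the results below.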
00:34:14.439 [2024-11-15 11:57:39.802804] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
00:34:14.439 [2024-11-15 11:57:39.802881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1332610 ]
00:34:14.439 [2024-11-15 11:57:39.900716] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:14.700 [2024-11-15 11:57:39.954354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:34:14.961 Running I/O for 10 seconds...
00:34:16.845 6390.00 IOPS, 49.92 MiB/s
[2024-11-15T10:57:43.285Z] 6454.50 IOPS, 50.43 MiB/s
[2024-11-15T10:57:44.669Z] 6476.00 IOPS, 50.59 MiB/s
[2024-11-15T10:57:45.624Z] 6612.50 IOPS, 51.66 MiB/s
[2024-11-15T10:57:46.564Z] 7220.80 IOPS, 56.41 MiB/s
[2024-11-15T10:57:47.504Z] 7625.17 IOPS, 59.57 MiB/s
[2024-11-15T10:57:48.444Z] 7921.29 IOPS, 61.89 MiB/s
[2024-11-15T10:57:49.385Z] 8136.50 IOPS, 63.57 MiB/s
[2024-11-15T10:57:50.324Z] 8305.44 IOPS, 64.89 MiB/s
[2024-11-15T10:57:50.324Z] 8439.60 IOPS, 65.93 MiB/s
00:34:24.826 Latency(us)
00:34:24.826 [2024-11-15T10:57:50.324Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:24.826 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:34:24.826 Verification LBA range: start 0x0 length 0x1000
00:34:24.826 Nvme1n1 : 10.01 8444.74 65.97 0.00 0.00 15112.63 1747.63 27306.67
00:34:24.826 [2024-11-15T10:57:50.324Z] ===================================================================================================================
00:34:24.826 [2024-11-15T10:57:50.324Z] Total : 8444.74 65.97 0.00 0.00 15112.63 1747.63 27306.67
00:34:25.086 11:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1334603
00:34:25.086 11:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:34:25.086 11:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:25.086 11:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:34:25.086 11:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:34:25.086 11:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:34:25.086 11:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:34:25.086 11:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:34:25.086 11:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:34:25.086 {
00:34:25.086 "params": {
00:34:25.086 "name": "Nvme$subsystem",
00:34:25.086 "trtype": "$TEST_TRANSPORT",
00:34:25.086 "traddr": "$NVMF_FIRST_TARGET_IP",
00:34:25.086 "adrfam": "ipv4",
00:34:25.086 "trsvcid": "$NVMF_PORT",
00:34:25.086 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:34:25.086 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:34:25.086 "hdgst": ${hdgst:-false},
00:34:25.086 "ddgst": ${ddgst:-false}
00:34:25.086 },
00:34:25.086 "method": "bdev_nvme_attach_controller"
00:34:25.086 }
00:34:25.086 EOF
00:34:25.086 )")
00:34:25.086 11:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
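The second bdevperf job (-t 5 -q 128 -w randrw -M 50 -o 8192: five seconds of 50/50 random read/write) runs while the script deliberately re-issues nvmf_subsystem_add_ns for NSID 1, which is already taken. Each attempt pauses the subsystem, fails, and resumes it, forcing pause/resume cycles under live zero-copy I/O; the repeated "Requested NSID 1 already in use" / "Unable to add namespace" ERROR pairs that follow are therefore expected output, not failures. The pattern is roughly this sketch (not the literal script; $perfpid is the bdevperf PID recorded above):

    # hammer the subsystem with doomed add_ns calls while bdevperf is alive;
    # every call pauses and resumes cnode1 with zero-copy requests in flight
    while kill -0 "$perfpid" 2>/dev/null; do
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done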
00:34:25.086 [2024-11-15 11:57:50.387101] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:25.086 [2024-11-15 11:57:50.387128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:25.086 11:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:34:25.086 11:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:34:25.086 11:57:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:34:25.086 "params": {
00:34:25.086 "name": "Nvme1",
00:34:25.086 "trtype": "tcp",
00:34:25.086 "traddr": "10.0.0.2",
00:34:25.086 "adrfam": "ipv4",
00:34:25.086 "trsvcid": "4420",
00:34:25.086 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:34:25.086 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:34:25.086 "hdgst": false,
00:34:25.086 "ddgst": false
00:34:25.086 },
00:34:25.086 "method": "bdev_nvme_attach_controller"
00:34:25.086 }'
00:34:25.086 [2024-11-15 11:57:50.399073] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:25.086 [2024-11-15 11:57:50.399081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:25.086 [2024-11-15 11:57:50.411070] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:25.086 [2024-11-15 11:57:50.411077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:25.086 [2024-11-15 11:57:50.423070] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:25.086 [2024-11-15 11:57:50.423077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
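The "Requested NSID 1 already in use" / "Unable to add namespace" pairs that begin above, and dominate the rest of this run, are expected: while the second bdevperf instance (randrw) starts and runs, the test keeps re-issuing nvmf_subsystem_add_ns for NSID 1 on cnode1, forcing repeated subsystem pause/resume cycles under live I/O. A plausible reconstruction of that driver loop follows; rpc_cmd is the harness wrapper around scripts/rpc.py, and the exact loop body and cadence in target/zcopy.sh may differ:

# hypothetical sketch, not the verbatim target/zcopy.sh
while kill -0 "$perfpid" 2>/dev/null; do   # $perfpid: the backgrounded bdevperf (1334603 above)
    # NSID 1 is already occupied by malloc0, so every attempt fails with the
    # subsystem.c:2126 / nvmf_rpc.c:1520 pair -- but each attempt still drives
    # the subsystem through pause/resume, which is the path being exercised
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done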
00:34:25.086 [2024-11-15 11:57:50.430749] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization...
00:34:25.086 [2024-11-15 11:57:50.430797] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1334603 ]
[... the NSID error pair repeats every ~12 ms, 11:57:50.435 through 11:57:50.507 ...]
00:34:25.086 [2024-11-15 11:57:50.512093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[... the NSID error pair repeats at 11:57:50.519 and 11:57:50.531 ...]
00:34:25.086 [2024-11-15 11:57:50.542369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[... the NSID error pair repeats every ~12 ms, 11:57:50.543 through 11:57:50.795 ...]
00:34:25.347 Running I/O for 5 seconds...
[... the NSID error pair repeats every ~13 ms, 11:57:50.809 through 11:57:50.944 ...]
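With the RPC error pair repeating every dozen milliseconds, the once-per-second throughput readouts below are easy to miss; a simple filter pulls them out of a saved copy of the console output (console.log is a hypothetical file name, not an artifact of this job):

grep -Eo '[0-9]+\.[0-9]+ IOPS, [0-9]+\.[0-9]+ MiB/s' console.log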
[... the NSID error pair repeats every ~13 ms, 11:57:50.958 through 11:57:51.798 ...]
00:34:26.388 19015.00 IOPS, 148.55 MiB/s [2024-11-15T10:57:51.886Z]
[... the NSID error pair repeats every ~13 ms, 11:57:51.811 through 11:57:51.920 ...]
[... the NSID error pair repeats every ~13 ms, 11:57:51.934 through 11:57:52.803 ...]
00:34:27.432 19084.50 IOPS, 149.10 MiB/s [2024-11-15T10:57:52.930Z]
[... the NSID error pair repeats every ~13 ms, 11:57:52.816 through 11:57:52.911 ...]
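As a consistency check, these readouts match the 8 KiB I/O size of this run (-o 8192): 19084.50 IOPS x 8192 bytes = 156,340,224 B/s, and 156,340,224 / 1,048,576 ≈ 149.10 MiB/s, exactly the figure logged above. The same arithmetic reproduces the other samples:

awk 'BEGIN { printf "%.2f MiB/s\n", 19084.50 * 8192 / 1048576 }'   # -> 149.10 MiB/s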
[... the NSID error pair repeats every ~13 ms, 11:57:52.924 through 11:57:53.806 ...]
00:34:28.474 19096.00 IOPS, 149.19 MiB/s [2024-11-15T10:57:53.972Z]
[... the NSID error pair repeats every ~13 ms, 11:57:53.819 through 11:57:53.887 ...]
00:34:28.474 [2024-11-15
11:57:53.902120] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.474 [2024-11-15 11:57:53.902135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.474 [2024-11-15 11:57:53.915240] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.474 [2024-11-15 11:57:53.915255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.474 [2024-11-15 11:57:53.927771] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.474 [2024-11-15 11:57:53.927786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.474 [2024-11-15 11:57:53.942752] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.474 [2024-11-15 11:57:53.942766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.474 [2024-11-15 11:57:53.955663] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.474 [2024-11-15 11:57:53.955677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.474 [2024-11-15 11:57:53.970070] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.474 [2024-11-15 11:57:53.970084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.735 [2024-11-15 11:57:53.983054] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.735 [2024-11-15 11:57:53.983069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.735 [2024-11-15 11:57:53.996324] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.735 [2024-11-15 11:57:53.996338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.735 [2024-11-15 11:57:54.010602] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.735 [2024-11-15 11:57:54.010621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.735 [2024-11-15 11:57:54.023399] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.735 [2024-11-15 11:57:54.023413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.735 [2024-11-15 11:57:54.038628] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.735 [2024-11-15 11:57:54.038642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.735 [2024-11-15 11:57:54.051951] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.735 [2024-11-15 11:57:54.051965] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.735 [2024-11-15 11:57:54.066366] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.735 [2024-11-15 11:57:54.066380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.735 [2024-11-15 11:57:54.079375] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.735 [2024-11-15 11:57:54.079389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.735 [2024-11-15 11:57:54.093974] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.735 [2024-11-15 11:57:54.093989] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.735 [2024-11-15 11:57:54.106922] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.735 [2024-11-15 11:57:54.106937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.735 [2024-11-15 11:57:54.120511] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.735 [2024-11-15 11:57:54.120526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.735 [2024-11-15 11:57:54.134290] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.735 [2024-11-15 11:57:54.134305] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.735 [2024-11-15 11:57:54.146911] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.735 [2024-11-15 11:57:54.146926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.735 [2024-11-15 11:57:54.159815] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.735 [2024-11-15 11:57:54.159829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.735 [2024-11-15 11:57:54.173915] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.735 [2024-11-15 11:57:54.173929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.735 [2024-11-15 11:57:54.186993] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.735 [2024-11-15 11:57:54.187007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.735 [2024-11-15 11:57:54.200037] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.735 [2024-11-15 11:57:54.200051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.735 [2024-11-15 11:57:54.214123] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.735 [2024-11-15 11:57:54.214138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.735 [2024-11-15 11:57:54.227392] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.735 [2024-11-15 11:57:54.227406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.996 [2024-11-15 11:57:54.242025] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.996 [2024-11-15 11:57:54.242040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.996 [2024-11-15 11:57:54.254959] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.996 [2024-11-15 11:57:54.254974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.996 [2024-11-15 11:57:54.267854] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.996 [2024-11-15 11:57:54.267872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.996 [2024-11-15 11:57:54.282800] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.996 [2024-11-15 11:57:54.282815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.996 [2024-11-15 11:57:54.295934] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.996 [2024-11-15 11:57:54.295949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.996 [2024-11-15 11:57:54.310301] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.996 [2024-11-15 11:57:54.310316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.996 [2024-11-15 11:57:54.323331] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.996 [2024-11-15 11:57:54.323345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.996 [2024-11-15 11:57:54.338336] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.996 [2024-11-15 11:57:54.338350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.996 [2024-11-15 11:57:54.351320] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.996 [2024-11-15 11:57:54.351335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.996 [2024-11-15 11:57:54.364145] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.996 [2024-11-15 11:57:54.364160] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.996 [2024-11-15 11:57:54.378698] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.996 [2024-11-15 11:57:54.378713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.996 [2024-11-15 11:57:54.391505] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.996 [2024-11-15 11:57:54.391519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.996 [2024-11-15 11:57:54.405971] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.996 [2024-11-15 11:57:54.405986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.996 [2024-11-15 11:57:54.418995] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.996 [2024-11-15 11:57:54.419009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.996 [2024-11-15 11:57:54.431541] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.996 [2024-11-15 11:57:54.431555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.996 [2024-11-15 11:57:54.445780] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.996 [2024-11-15 11:57:54.445795] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.996 [2024-11-15 11:57:54.458758] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.996 [2024-11-15 11:57:54.458772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.996 [2024-11-15 11:57:54.471530] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.996 [2024-11-15 11:57:54.471545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:28.996 [2024-11-15 11:57:54.486202] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:28.996 [2024-11-15 11:57:54.486217] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.256 [2024-11-15 11:57:54.499483] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.256 [2024-11-15 11:57:54.499498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.256 [2024-11-15 11:57:54.514311] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.256 [2024-11-15 11:57:54.514325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.256 [2024-11-15 11:57:54.527552] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.256 [2024-11-15 11:57:54.527574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.256 [2024-11-15 11:57:54.542367] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.256 [2024-11-15 11:57:54.542382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.256 [2024-11-15 11:57:54.555476] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.256 [2024-11-15 11:57:54.555490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.256 [2024-11-15 11:57:54.569680] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.256 [2024-11-15 11:57:54.569695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.257 [2024-11-15 11:57:54.582812] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.257 [2024-11-15 11:57:54.582827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.257 [2024-11-15 11:57:54.595624] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.257 [2024-11-15 11:57:54.595639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.257 [2024-11-15 11:57:54.609968] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.257 [2024-11-15 11:57:54.609982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.257 [2024-11-15 11:57:54.622836] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.257 [2024-11-15 11:57:54.622851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.257 [2024-11-15 11:57:54.635325] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.257 [2024-11-15 11:57:54.635340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.257 [2024-11-15 11:57:54.648377] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.257 [2024-11-15 11:57:54.648392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.257 [2024-11-15 11:57:54.662271] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.257 [2024-11-15 11:57:54.662285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.257 [2024-11-15 11:57:54.675006] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.257 [2024-11-15 11:57:54.675022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.257 [2024-11-15 11:57:54.688317] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.257 [2024-11-15 11:57:54.688331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.257 [2024-11-15 11:57:54.701903] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.257 [2024-11-15 11:57:54.701918] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.257 [2024-11-15 11:57:54.715065] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.257 [2024-11-15 11:57:54.715080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.257 [2024-11-15 11:57:54.728061] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.257 [2024-11-15 11:57:54.728076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.257 [2024-11-15 11:57:54.741946] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.257 [2024-11-15 11:57:54.741961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.516 [2024-11-15 11:57:54.754708] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.516 [2024-11-15 11:57:54.754724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.516 [2024-11-15 11:57:54.767200] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.516 [2024-11-15 11:57:54.767215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.516 [2024-11-15 11:57:54.780077] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.516 [2024-11-15 11:57:54.780095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.516 [2024-11-15 11:57:54.793804] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.516 [2024-11-15 11:57:54.793819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.516 [2024-11-15 11:57:54.806861] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.516 [2024-11-15 11:57:54.806876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.516 19097.75 IOPS, 149.20 MiB/s [2024-11-15T10:57:55.014Z] [2024-11-15 11:57:54.819897] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.516 [2024-11-15 11:57:54.819911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.516 [2024-11-15 11:57:54.834363] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.516 [2024-11-15 11:57:54.834377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.516 [2024-11-15 11:57:54.847361] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.516 [2024-11-15 11:57:54.847375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.516 [2024-11-15 11:57:54.862129] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.516 [2024-11-15 11:57:54.862144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.516 [2024-11-15 11:57:54.875405] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:34:29.516 [2024-11-15 11:57:54.875419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.516 [2024-11-15 11:57:54.890213] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.516 [2024-11-15 11:57:54.890228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.516 [2024-11-15 11:57:54.903287] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.516 [2024-11-15 11:57:54.903301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.516 [2024-11-15 11:57:54.916043] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.516 [2024-11-15 11:57:54.916057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.516 [2024-11-15 11:57:54.930248] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.516 [2024-11-15 11:57:54.930262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.516 [2024-11-15 11:57:54.942800] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.516 [2024-11-15 11:57:54.942814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.516 [2024-11-15 11:57:54.956025] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.516 [2024-11-15 11:57:54.956039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.516 [2024-11-15 11:57:54.969920] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.517 [2024-11-15 11:57:54.969935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.517 [2024-11-15 11:57:54.982886] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.517 [2024-11-15 11:57:54.982900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.517 [2024-11-15 11:57:54.995651] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.517 [2024-11-15 11:57:54.995665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.517 [2024-11-15 11:57:55.010557] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.517 [2024-11-15 11:57:55.010578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.777 [2024-11-15 11:57:55.023749] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.777 [2024-11-15 11:57:55.023764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.777 [2024-11-15 11:57:55.038194] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.777 [2024-11-15 11:57:55.038208] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.777 [2024-11-15 11:57:55.051401] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.777 [2024-11-15 11:57:55.051414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.777 [2024-11-15 11:57:55.066036] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.777 [2024-11-15 11:57:55.066049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.777 [2024-11-15 11:57:55.079051] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.777 [2024-11-15 11:57:55.079065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.777 [2024-11-15 11:57:55.091749] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.777 [2024-11-15 11:57:55.091763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.777 [2024-11-15 11:57:55.105784] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.777 [2024-11-15 11:57:55.105799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.777 [2024-11-15 11:57:55.118799] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.777 [2024-11-15 11:57:55.118813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.777 [2024-11-15 11:57:55.131757] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.777 [2024-11-15 11:57:55.131770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.777 [2024-11-15 11:57:55.146251] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.777 [2024-11-15 11:57:55.146265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.777 [2024-11-15 11:57:55.159405] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.777 [2024-11-15 11:57:55.159418] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.777 [2024-11-15 11:57:55.174478] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.777 [2024-11-15 11:57:55.174493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.777 [2024-11-15 11:57:55.187084] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.777 [2024-11-15 11:57:55.187099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.777 [2024-11-15 11:57:55.199729] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.777 [2024-11-15 11:57:55.199742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.777 [2024-11-15 11:57:55.214216] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.777 [2024-11-15 11:57:55.214230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.777 [2024-11-15 11:57:55.227119] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.777 [2024-11-15 11:57:55.227133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.777 [2024-11-15 11:57:55.240052] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.777 [2024-11-15 11:57:55.240066] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.777 [2024-11-15 11:57:55.254672] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.777 [2024-11-15 11:57:55.254686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:29.777 [2024-11-15 11:57:55.267572] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:29.777 [2024-11-15 11:57:55.267586] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.039 [2024-11-15 11:57:55.281906] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.039 [2024-11-15 11:57:55.281921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.039 [2024-11-15 11:57:55.295001] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.039 [2024-11-15 11:57:55.295015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.039 [2024-11-15 11:57:55.307766] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.039 [2024-11-15 11:57:55.307780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.039 [2024-11-15 11:57:55.322163] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.039 [2024-11-15 11:57:55.322177] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.039 [2024-11-15 11:57:55.335253] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.039 [2024-11-15 11:57:55.335267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.039 [2024-11-15 11:57:55.347972] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.039 [2024-11-15 11:57:55.347985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.039 [2024-11-15 11:57:55.361881] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.039 [2024-11-15 11:57:55.361895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.039 [2024-11-15 11:57:55.375001] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.039 [2024-11-15 11:57:55.375015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.039 [2024-11-15 11:57:55.388252] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.039 [2024-11-15 11:57:55.388265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.039 [2024-11-15 11:57:55.402612] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.039 [2024-11-15 11:57:55.402625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.039 [2024-11-15 11:57:55.415665] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.039 [2024-11-15 11:57:55.415678] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.039 [2024-11-15 11:57:55.429768] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.039 [2024-11-15 11:57:55.429782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.039 [2024-11-15 11:57:55.442629] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.039 [2024-11-15 11:57:55.442643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.039 [2024-11-15 11:57:55.456128] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.039 [2024-11-15 11:57:55.456142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.039 [2024-11-15 11:57:55.470080] 
subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.039 [2024-11-15 11:57:55.470094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.039 [2024-11-15 11:57:55.482926] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.039 [2024-11-15 11:57:55.482940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.039 [2024-11-15 11:57:55.495946] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.039 [2024-11-15 11:57:55.495960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.039 [2024-11-15 11:57:55.510156] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.039 [2024-11-15 11:57:55.510171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.039 [2024-11-15 11:57:55.522954] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.039 [2024-11-15 11:57:55.522968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.300 [2024-11-15 11:57:55.535722] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.300 [2024-11-15 11:57:55.535737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.300 [2024-11-15 11:57:55.550179] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.300 [2024-11-15 11:57:55.550193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.300 [2024-11-15 11:57:55.563079] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.300 [2024-11-15 11:57:55.563093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.300 [2024-11-15 11:57:55.576458] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.300 [2024-11-15 11:57:55.576472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.300 [2024-11-15 11:57:55.590315] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.300 [2024-11-15 11:57:55.590329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.300 [2024-11-15 11:57:55.603245] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.300 [2024-11-15 11:57:55.603260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.300 [2024-11-15 11:57:55.616130] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.300 [2024-11-15 11:57:55.616144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.300 [2024-11-15 11:57:55.630360] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.300 [2024-11-15 11:57:55.630374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.300 [2024-11-15 11:57:55.643448] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.300 [2024-11-15 11:57:55.643462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:30.300 [2024-11-15 11:57:55.658036] subsystem.c:2126:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:30.300 [2024-11-15 11:57:55.658051] 
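The error flood above is the test working as intended, not a target failure: while bdevperf (pid 1334603) drives I/O against the zero-copy namespace, zcopy.sh keeps re-issuing an add-namespace RPC for an NSID that is still attached, and the target correctly rejects every attempt. A minimal sketch of the colliding call; the RPC name, NQN, bdev name malloc0 and pid are taken from this trace, while the surrounding while-loop and the $perfpid variable are a hedged reconstruction, not a verbatim zcopy.sh excerpt:

# Hedged reconstruction: retry the add while the I/O generator is alive.
# NSID 1 is already backed by malloc0, so the target answers each attempt with
# "Requested NSID 1 already in use" and the RPC layer logs "Unable to add namespace".
while kill -0 "$perfpid" 2> /dev/null; do    # $perfpid: the bdevperf job, 1334603 in this run
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done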
00:34:30.562 19114.60 IOPS, 149.33 MiB/s [2024-11-15T10:57:56.060Z]
00:34:30.562 
00:34:30.562 Latency(us)
00:34:30.562 [2024-11-15T10:57:56.060Z] Device Information : runtime(s)      IOPS     MiB/s   Fail/s   TO/s   Average       min       max
00:34:30.562 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:34:30.562 Nvme1n1 :                                 5.01  19117.15    149.35     0.00   0.00   6689.62   2717.01  11687.25
00:34:30.562 [2024-11-15T10:57:56.060Z] ===================================================================================================================
00:34:30.562 [2024-11-15T10:57:56.060Z] Total :                               19117.15    149.35     0.00   0.00   6689.62   2717.01  11687.25
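Two quick consistency checks on the summary table; the 8192-byte IO size and the queue depth of 128 come from the Job line, and the formulas are the standard bdevperf relations, added here for the reader:

# Throughput column: IOPS * IO size, converted to MiB/s
#   19117.15 * 8192 / 1048576  = 149.35 MiB/s            -> matches the MiB/s column
# Average latency via Little's law: queue depth / IOPS
#   128 / 19117.15             = 6.70e-3 s, i.e. ~6696 us -> within 0.1% of the reported 6689.62 us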
[... the same error pair continues at the same cadence for eight more occurrences, 11:57:55.827073 through 11:57:55.911080, until the I/O generator exits ...]
00:34:30.562 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1334603) - No such process
00:34:30.562 11:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1334603
00:34:30.562 11:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:34:30.562 11:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:30.562 11:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:30.562 11:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:30.562 11:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:34:30.562 11:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:34:30.562 11:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:34:30.562 delay0
00:34:30.562 11:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:34:30.562 11:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
delay0 -n 1 00:34:30.562 11:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:30.562 11:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:30.562 11:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:30.562 11:57:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:34:30.562 [2024-11-15 11:57:56.035843] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:34:38.701 Initializing NVMe Controllers 00:34:38.701 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:38.701 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:38.701 Initialization complete. Launching workers. 00:34:38.701 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 316, failed: 7667 00:34:38.701 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 7947, failed to submit 36 00:34:38.701 success 7821, unsuccessful 126, failed 0 00:34:38.701 11:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:34:38.701 11:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:34:38.701 11:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:38.701 11:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:34:38.701 11:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:38.701 11:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:34:38.701 11:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:38.701 11:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:38.701 rmmod nvme_tcp 00:34:38.701 rmmod nvme_fabrics 00:34:38.701 rmmod nvme_keyring 00:34:38.701 11:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:38.701 11:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:34:38.701 11:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:34:38.701 11:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1332571 ']' 00:34:38.701 11:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1332571 00:34:38.701 11:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 1332571 ']' 00:34:38.701 11:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 1332571 00:34:38.701 11:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:34:38.701 11:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:38.701 
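The run above is the abort example exercising cancellation, not a data-integrity failure: the namespace was deliberately re-backed by a delay bdev (the bdev_delay_create step traced earlier, with 1000000 us injected on reads and writes) so that queued I/O stays outstanding long enough to be aborted. The final counters are self-consistent:

# Accounting for the abort run (all numbers from the three result lines above):
#   aborts submitted (7947) + aborts that failed to submit (36) = 7983
#   I/O completed (316)     + I/O failed, i.e. aborted (7667)   = 7983   -> every queued I/O accounted for
#   abort successes (7821)  + unsuccessful aborts (126)         = 7947   -> matches the submissions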
11:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1332571 00:34:38.701 11:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:34:38.701 11:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:34:38.701 11:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1332571' 00:34:38.701 killing process with pid 1332571 00:34:38.701 11:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 1332571 00:34:38.701 11:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 1332571 00:34:38.701 11:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:38.701 11:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:38.701 11:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:38.701 11:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:34:38.701 11:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:34:38.701 11:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:38.701 11:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:34:38.701 11:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:38.701 11:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:38.701 11:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:38.701 11:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:38.701 11:58:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:40.086 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:40.086 00:34:40.086 real 0m34.451s 00:34:40.086 user 0m43.967s 00:34:40.086 sys 0m12.674s 00:34:40.086 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:40.086 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:40.086 ************************************ 00:34:40.086 END TEST nvmf_zcopy 00:34:40.086 ************************************ 00:34:40.086 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:34:40.086 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:40.086 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:40.086 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:40.086 
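For readers following the teardown that closed out nvmf_zcopy above: the traced calls reduce to the sequence below, a condensed paraphrase keyed to the common.sh line numbers in the log, not a verbatim copy of the helpers:

# nvmftestfini, as traced (condensed; comments summarize the logged steps)
nvmfcleanup           # sync, then retry 'modprobe -v -r nvme-tcp' / 'nvme-fabrics' up to 20 times
                      # (the successful unloads printed: rmmod nvme_tcp / nvme_fabrics / nvme_keyring)
killprocess 1332571   # kill the nvmf target (reactor_1), then wait for the pid to exit
iptr                  # iptables-save | grep -v SPDK_NVMF | iptables-restore  (drop test firewall rules)
remove_spdk_ns        # delete the spdk network namespace; afterwards: ip -4 addr flush cvl_0_1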
************************************ 00:34:40.086 START TEST nvmf_nmic 00:34:40.086 ************************************ 00:34:40.086 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:34:40.347 * Looking for test storage... 00:34:40.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:40.347 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:40.347 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:34:40.347 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:40.347 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:40.347 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:40.347 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:40.347 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:40.347 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:34:40.347 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:34:40.347 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:34:40.347 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:34:40.347 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:40.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:40.348 --rc genhtml_branch_coverage=1 00:34:40.348 --rc genhtml_function_coverage=1 00:34:40.348 --rc genhtml_legend=1 00:34:40.348 --rc geninfo_all_blocks=1 00:34:40.348 --rc geninfo_unexecuted_blocks=1 00:34:40.348 00:34:40.348 ' 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:40.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:40.348 --rc genhtml_branch_coverage=1 00:34:40.348 --rc genhtml_function_coverage=1 00:34:40.348 --rc genhtml_legend=1 00:34:40.348 --rc geninfo_all_blocks=1 00:34:40.348 --rc geninfo_unexecuted_blocks=1 00:34:40.348 00:34:40.348 ' 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:40.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:40.348 --rc genhtml_branch_coverage=1 00:34:40.348 --rc genhtml_function_coverage=1 00:34:40.348 --rc genhtml_legend=1 00:34:40.348 --rc geninfo_all_blocks=1 00:34:40.348 --rc geninfo_unexecuted_blocks=1 00:34:40.348 00:34:40.348 ' 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:40.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:40.348 --rc genhtml_branch_coverage=1 00:34:40.348 --rc genhtml_function_coverage=1 00:34:40.348 --rc genhtml_legend=1 00:34:40.348 --rc geninfo_all_blocks=1 00:34:40.348 --rc geninfo_unexecuted_blocks=1 00:34:40.348 00:34:40.348 ' 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:40.348 11:58:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:40.348 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:40.349 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:40.349 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:40.349 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:40.349 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:34:40.349 11:58:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:48.496 11:58:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:48.496 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:48.496 11:58:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:48.496 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:48.496 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:48.497 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:48.497 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:48.497 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:48.497 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:48.497 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:48.497 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:48.497 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:48.497 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:48.497 
11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:48.497 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:48.497 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:48.497 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:48.497 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:48.497 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:48.497 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:48.497 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:34:48.497 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:48.497 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:48.497 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:48.497 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:48.497 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:48.497 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:48.497 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:48.497 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:48.497 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:48.497 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:48.497 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:48.497 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:48.497 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:48.497 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:48.497 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:48.497 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:48.497 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:48.497 11:58:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:48.497 11:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:48.497 11:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
00:34:48.497 11:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:48.497 11:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:48.497 11:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:48.497 11:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:48.497 11:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:48.497 11:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:48.497 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:48.497 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:34:48.497 00:34:48.497 --- 10.0.0.2 ping statistics --- 00:34:48.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:48.497 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:34:48.497 11:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:48.497 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:48.497 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:34:48.497 00:34:48.497 --- 10.0.0.1 ping statistics --- 00:34:48.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:48.497 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:34:48.497 11:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:48.497 11:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:34:48.497 11:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:48.497 11:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:48.497 11:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:48.497 11:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:48.497 11:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:48.497 11:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:48.497 11:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:48.497 11:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:34:48.497 11:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:48.497 11:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:48.497 11:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:48.497 11:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1341263 00:34:48.497 11:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
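[editor's note] For reference, the nvmf_tcp_init sequence logged above moves the target-side port into a private network namespace and leaves the initiator-side port in the default namespace, then verifies reachability in both directions. A minimal stand-alone sketch of the same topology, assuming the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing specific to this run's E810 ports:

  # Target port lives in its own netns; initiator port stays in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator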
nvmf/common.sh@510 -- # waitforlisten 1341263 00:34:48.497 11:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:48.497 11:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 1341263 ']' 00:34:48.497 11:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:48.497 11:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:48.497 11:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:48.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:48.497 11:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:48.497 11:58:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:48.497 [2024-11-15 11:58:13.334827] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:48.497 [2024-11-15 11:58:13.335967] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:34:48.497 [2024-11-15 11:58:13.336018] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:48.497 [2024-11-15 11:58:13.436573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:48.497 [2024-11-15 11:58:13.491306] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:48.497 [2024-11-15 11:58:13.491358] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:48.497 [2024-11-15 11:58:13.491367] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:48.497 [2024-11-15 11:58:13.491374] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:48.497 [2024-11-15 11:58:13.491381] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:48.497 [2024-11-15 11:58:13.493462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:48.497 [2024-11-15 11:58:13.493632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:48.497 [2024-11-15 11:58:13.493705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:48.497 [2024-11-15 11:58:13.493705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:48.497 [2024-11-15 11:58:13.572590] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:48.497 [2024-11-15 11:58:13.573571] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:48.497 [2024-11-15 11:58:13.573823] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:34:48.497 [2024-11-15 11:58:13.574332] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:48.497 [2024-11-15 11:58:13.574389] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:48.759 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:48.759 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:34:48.759 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:48.759 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:48.759 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:48.759 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:48.759 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:48.759 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.759 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:48.759 [2024-11-15 11:58:14.187230] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:48.759 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.759 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:48.759 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.759 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:48.759 Malloc0 00:34:48.759 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:48.759 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:48.759 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:48.759 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:49.020 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.020 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:49.020 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.020 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:49.020 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.020 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:34:49.020 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.020 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:49.020 [2024-11-15 11:58:14.283488] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:49.020 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.020 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:34:49.020 test case1: single bdev can't be used in multiple subsystems 00:34:49.020 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:34:49.020 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.020 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:49.020 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.020 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:49.020 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.020 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:49.020 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.020 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:34:49.020 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:34:49.020 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.020 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:49.020 [2024-11-15 11:58:14.318856] bdev.c:8502:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:34:49.020 [2024-11-15 11:58:14.318883] subsystem.c:2156:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:34:49.020 [2024-11-15 11:58:14.318893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:49.020 request: 00:34:49.020 { 00:34:49.020 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:34:49.020 "namespace": { 00:34:49.020 "bdev_name": "Malloc0", 00:34:49.020 "no_auto_visible": false, 00:34:49.020 "no_metadata": false 00:34:49.020 }, 00:34:49.020 "method": "nvmf_subsystem_add_ns", 00:34:49.020 "req_id": 1 00:34:49.020 } 00:34:49.020 Got JSON-RPC error response 00:34:49.020 response: 00:34:49.020 { 00:34:49.020 "code": -32602, 00:34:49.020 "message": "Invalid parameters" 00:34:49.020 } 00:34:49.020 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:34:49.020 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:34:49.020 11:58:14 
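[editor's note] The JSON-RPC error above is the expected outcome of test case1: adding a bdev to a subsystem claims it with type exclusive_write, so a second nvmf_subsystem_add_ns on the same bdev is rejected (code -32602, "Invalid parameters"). rpc_cmd in this log is a thin wrapper around scripts/rpc.py, so a hedged sketch of the same sequence run by hand, with the NQNs and serials taken from this run, would be:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # claims Malloc0 (exclusive_write)
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # rejected: bdev already claimed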
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:34:49.020 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:34:49.020 Adding namespace failed - expected result. 00:34:49.020 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:34:49.020 test case2: host connect to nvmf target in multiple paths 00:34:49.020 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:49.020 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:49.020 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:49.020 [2024-11-15 11:58:14.331006] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:49.020 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:49.020 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:49.593 11:58:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:34:49.853 11:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:34:49.853 11:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:34:49.853 11:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:34:49.853 11:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:34:49.854 11:58:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:34:52.399 11:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:34:52.399 11:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:34:52.399 11:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:34:52.399 11:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:34:52.399 11:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:34:52.399 11:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:34:52.399 11:58:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:52.399 [global] 00:34:52.399 thread=1 00:34:52.399 invalidate=1 
00:34:52.399 rw=write 00:34:52.399 time_based=1 00:34:52.399 runtime=1 00:34:52.399 ioengine=libaio 00:34:52.399 direct=1 00:34:52.399 bs=4096 00:34:52.399 iodepth=1 00:34:52.399 norandommap=0 00:34:52.399 numjobs=1 00:34:52.399 00:34:52.399 verify_dump=1 00:34:52.399 verify_backlog=512 00:34:52.399 verify_state_save=0 00:34:52.399 do_verify=1 00:34:52.399 verify=crc32c-intel 00:34:52.399 [job0] 00:34:52.399 filename=/dev/nvme0n1 00:34:52.400 Could not set queue depth (nvme0n1) 00:34:52.400 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:52.400 fio-3.35 00:34:52.400 Starting 1 thread 00:34:53.782 00:34:53.782 job0: (groupid=0, jobs=1): err= 0: pid=1342174: Fri Nov 15 11:58:18 2024 00:34:53.782 read: IOPS=17, BW=69.5KiB/s (71.2kB/s)(72.0KiB/1036msec) 00:34:53.782 slat (nsec): min=9943, max=26376, avg=24275.00, stdev=3600.71 00:34:53.782 clat (usec): min=956, max=42016, avg=39662.36, stdev=9660.02 00:34:53.782 lat (usec): min=966, max=42041, avg=39686.63, stdev=9663.60 00:34:53.782 clat percentiles (usec): 00:34:53.782 | 1.00th=[ 955], 5.00th=[ 955], 10.00th=[41681], 20.00th=[41681], 00:34:53.782 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:34:53.782 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:53.782 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:53.782 | 99.99th=[42206] 00:34:53.782 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:34:53.783 slat (nsec): min=9511, max=63352, avg=28312.78, stdev=9707.71 00:34:53.783 clat (usec): min=239, max=856, avg=593.09, stdev=103.76 00:34:53.783 lat (usec): min=272, max=867, avg=621.41, stdev=107.80 00:34:53.783 clat percentiles (usec): 00:34:53.783 | 1.00th=[ 351], 5.00th=[ 396], 10.00th=[ 445], 20.00th=[ 510], 00:34:53.783 | 30.00th=[ 553], 40.00th=[ 578], 50.00th=[ 594], 60.00th=[ 627], 00:34:53.783 | 70.00th=[ 668], 80.00th=[ 693], 90.00th=[ 709], 95.00th=[ 734], 00:34:53.783 | 99.00th=[ 783], 99.50th=[ 799], 99.90th=[ 857], 99.95th=[ 857], 00:34:53.783 | 99.99th=[ 857] 00:34:53.783 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:34:53.783 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:53.783 lat (usec) : 250=0.19%, 500=17.55%, 750=75.66%, 1000=3.40% 00:34:53.783 lat (msec) : 50=3.21% 00:34:53.783 cpu : usr=0.87%, sys=1.26%, ctx=530, majf=0, minf=1 00:34:53.783 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:53.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:53.783 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:53.783 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:53.783 00:34:53.783 Run status group 0 (all jobs): 00:34:53.783 READ: bw=69.5KiB/s (71.2kB/s), 69.5KiB/s-69.5KiB/s (71.2kB/s-71.2kB/s), io=72.0KiB (73.7kB), run=1036-1036msec 00:34:53.783 WRITE: bw=1977KiB/s (2024kB/s), 1977KiB/s-1977KiB/s (2024kB/s-2024kB/s), io=2048KiB (2097kB), run=1036-1036msec 00:34:53.783 00:34:53.783 Disk stats (read/write): 00:34:53.783 nvme0n1: ios=64/512, merge=0/0, ticks=608/300, in_queue=908, util=93.89% 00:34:53.783 11:58:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:53.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:34:53.783 11:58:19 
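[editor's note] The [global]/[job0] stanzas dumped above are the job file that SPDK's fio-wrapper generated from its arguments (-p nvmf -i 4096 -d 1 -t write -r 1 -v). A rough stand-alone equivalent, assuming fio is installed, /dev/nvme0n1 is the NVMe-oF namespace from the connect step, and the file name nmic-job0.fio is hypothetical:

  cat > nmic-job0.fio <<'EOF'
  [global]
  thread=1
  invalidate=1
  rw=write
  time_based=1
  runtime=1
  ioengine=libaio
  direct=1
  bs=4096
  iodepth=1
  norandommap=0
  numjobs=1
  verify_dump=1
  verify_backlog=512
  verify_state_save=0
  do_verify=1
  verify=crc32c-intel

  [job0]
  filename=/dev/nvme0n1
  EOF
  fio nmic-job0.fio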
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:53.783 11:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:34:53.783 11:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:34:53.783 11:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:53.783 11:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:34:53.783 11:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:53.783 11:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:34:53.783 11:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:34:53.783 11:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:34:53.783 11:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:53.783 11:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:34:53.783 11:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:53.783 11:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:34:53.783 11:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:53.783 11:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:53.783 rmmod nvme_tcp 00:34:53.783 rmmod nvme_fabrics 00:34:53.783 rmmod nvme_keyring 00:34:53.783 11:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:53.783 11:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:34:53.783 11:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:34:53.783 11:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1341263 ']' 00:34:53.783 11:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1341263 00:34:53.783 11:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 1341263 ']' 00:34:53.783 11:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 1341263 00:34:53.783 11:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:34:53.783 11:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:53.783 11:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1341263 00:34:53.783 11:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:53.783 11:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:53.783 11:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@970 -- # 
echo 'killing process with pid 1341263' 00:34:53.783 killing process with pid 1341263 00:34:53.783 11:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 1341263 00:34:53.783 11:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 1341263 00:34:54.044 11:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:54.044 11:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:54.044 11:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:54.044 11:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:34:54.044 11:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:34:54.044 11:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:54.044 11:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:34:54.044 11:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:54.044 11:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:54.044 11:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:54.044 11:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:54.044 11:58:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:55.956 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:55.956 00:34:55.956 real 0m15.810s 00:34:55.956 user 0m33.093s 00:34:55.956 sys 0m7.348s 00:34:55.956 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:55.956 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:55.956 ************************************ 00:34:55.956 END TEST nvmf_nmic 00:34:55.956 ************************************ 00:34:55.956 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:55.956 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:55.956 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:55.956 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:56.218 ************************************ 00:34:56.218 START TEST nvmf_fio_target 00:34:56.218 ************************************ 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:56.218 * Looking for test storage... 
00:34:56.218 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:56.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:56.218 --rc genhtml_branch_coverage=1 00:34:56.218 --rc genhtml_function_coverage=1 00:34:56.218 --rc genhtml_legend=1 00:34:56.218 --rc geninfo_all_blocks=1 00:34:56.218 --rc geninfo_unexecuted_blocks=1 00:34:56.218 00:34:56.218 ' 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:56.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:56.218 --rc genhtml_branch_coverage=1 00:34:56.218 --rc genhtml_function_coverage=1 00:34:56.218 --rc genhtml_legend=1 00:34:56.218 --rc geninfo_all_blocks=1 00:34:56.218 --rc geninfo_unexecuted_blocks=1 00:34:56.218 00:34:56.218 ' 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:56.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:56.218 --rc genhtml_branch_coverage=1 00:34:56.218 --rc genhtml_function_coverage=1 00:34:56.218 --rc genhtml_legend=1 00:34:56.218 --rc geninfo_all_blocks=1 00:34:56.218 --rc geninfo_unexecuted_blocks=1 00:34:56.218 00:34:56.218 ' 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:56.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:56.218 --rc genhtml_branch_coverage=1 00:34:56.218 --rc genhtml_function_coverage=1 00:34:56.218 --rc genhtml_legend=1 00:34:56.218 --rc geninfo_all_blocks=1 00:34:56.218 --rc geninfo_unexecuted_blocks=1 00:34:56.218 
00:34:56.218 ' 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:56.218 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.219 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.219 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.219 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:34:56.219 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:56.219 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:34:56.219 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:56.219 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:56.219 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:56.219 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:56.219 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:34:56.219 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:56.219 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:56.219 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:56.219 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:56.219 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:56.219 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:56.219 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:56.219 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:56.219 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:34:56.219 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:56.219 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:56.219 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:56.219 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:56.219 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:56.219 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:56.219 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:56.219 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:56.219 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:56.219 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:56.219 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:56.219 11:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:04.361 11:58:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:04.361 11:58:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:04.361 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:04.361 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:04.361 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:04.361 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:04.362 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:04.362 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:04.362 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:04.362 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:04.362 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:35:04.362 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:04.362 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:04.362 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:04.362 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:04.362 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:04.362 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:04.362 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:04.362 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:04.362 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:04.362 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:04.362 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:04.362 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:04.362 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:04.362 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:35:04.362 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:04.362 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:04.362 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:04.362 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:04.362 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:04.362 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:04.362 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:04.362 11:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:04.362 11:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:04.362 11:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:04.362 11:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:04.362 11:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:04.362 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:04.362 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.600 ms 00:35:04.362 00:35:04.362 --- 10.0.0.2 ping statistics --- 00:35:04.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:04.362 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:35:04.362 11:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:04.362 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:04.362 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:35:04.362 00:35:04.362 --- 10.0.0.1 ping statistics --- 00:35:04.362 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:04.362 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:35:04.362 11:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:04.362 11:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:35:04.362 11:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:04.362 11:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:04.362 11:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:04.362 11:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:04.362 11:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:04.362 11:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:04.362 11:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:04.362 11:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:35:04.362 11:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:04.362 11:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:04.362 11:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:04.362 11:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1346711 00:35:04.362 11:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1346711 00:35:04.362 11:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:35:04.362 11:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 1346711 ']' 00:35:04.362 11:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:04.362 11:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:04.362 11:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:04.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
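The nvmftestinit block above is the entire physical-NIC network setup: the first E810 port (cvl_0_0) is moved into a fresh namespace, cvl_0_0_ns_spdk, to act as the target at 10.0.0.2/24, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1/24; an iptables rule then admits NVMe/TCP traffic on port 4420 and one ping in each direction proves the link. A minimal standalone sketch of the same sequence, using the interface and namespace names reported by the log (the log's ipts helper additionally tags the rule with an SPDK_NVMF comment):

  # flush stale addresses, then move the target port into its own netns
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk

  # initiator side (root namespace) and target side (inside the netns)
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

  # admit NVMe/TCP on the initiator interface, then verify both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1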
00:35:04.362 11:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:04.362 11:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:04.362 [2024-11-15 11:58:29.163391] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:04.362 [2024-11-15 11:58:29.164544] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:35:04.362 [2024-11-15 11:58:29.164605] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:04.362 [2024-11-15 11:58:29.264821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:04.362 [2024-11-15 11:58:29.318301] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:04.362 [2024-11-15 11:58:29.318353] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:04.362 [2024-11-15 11:58:29.318362] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:04.362 [2024-11-15 11:58:29.318369] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:04.362 [2024-11-15 11:58:29.318376] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:04.362 [2024-11-15 11:58:29.320846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:04.362 [2024-11-15 11:58:29.321004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:04.362 [2024-11-15 11:58:29.321166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:04.362 [2024-11-15 11:58:29.321166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:04.362 [2024-11-15 11:58:29.399496] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:04.362 [2024-11-15 11:58:29.400454] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:04.362 [2024-11-15 11:58:29.400774] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:04.362 [2024-11-15 11:58:29.401402] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:04.362 [2024-11-15 11:58:29.401413] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
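nvmfappstart then launches the target inside that namespace, pinned to four cores and in interrupt mode, and blocks until the RPC socket answers; the reactor and spdk_thread notices above confirm all four poll groups came up in interrupt mode. Roughly equivalent shell, as a sketch only (the real waitforlisten in autotest_common.sh adds retry limits and better error reporting):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk \
      $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
  nvmfpid=$!

  # poll the UNIX-domain RPC socket until the app is ready to accept commands
  until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" || exit 1    # bail out if the target died during startup
      sleep 0.5
  done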
00:35:04.623 11:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:04.623 11:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:35:04.623 11:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:04.623 11:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:04.623 11:58:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:35:04.623 11:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:04.623 11:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:04.883 [2024-11-15 11:58:30.190063] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:04.883 11:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:05.144 11:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:35:05.144 11:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:05.517 11:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:35:05.517 11:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:05.517 11:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:35:05.517 11:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:05.788 11:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:35:05.788 11:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:35:05.788 11:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:06.072 11:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:35:06.072 11:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:06.414 11:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:35:06.414 11:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:35:06.414 11:58:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:35:06.414 11:58:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:35:06.705 11:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:06.967 11:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:35:06.967 11:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:06.967 11:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:35:06.967 11:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:35:07.229 11:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:07.490 [2024-11-15 11:58:32.778006] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:07.490 11:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:35:07.751 11:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:35:07.751 11:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:08.323 11:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:35:08.323 11:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:35:08.323 11:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:35:08.323 11:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:35:08.323 11:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:35:08.323 11:58:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:35:10.237 11:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:35:10.238 11:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o 
NAME,SERIAL 00:35:10.238 11:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:35:10.238 11:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:35:10.238 11:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:35:10.238 11:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:35:10.238 11:58:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:35:10.498 [global] 00:35:10.498 thread=1 00:35:10.498 invalidate=1 00:35:10.498 rw=write 00:35:10.498 time_based=1 00:35:10.498 runtime=1 00:35:10.498 ioengine=libaio 00:35:10.498 direct=1 00:35:10.498 bs=4096 00:35:10.498 iodepth=1 00:35:10.498 norandommap=0 00:35:10.498 numjobs=1 00:35:10.498 00:35:10.498 verify_dump=1 00:35:10.498 verify_backlog=512 00:35:10.498 verify_state_save=0 00:35:10.498 do_verify=1 00:35:10.498 verify=crc32c-intel 00:35:10.498 [job0] 00:35:10.498 filename=/dev/nvme0n1 00:35:10.498 [job1] 00:35:10.498 filename=/dev/nvme0n2 00:35:10.498 [job2] 00:35:10.498 filename=/dev/nvme0n3 00:35:10.498 [job3] 00:35:10.498 filename=/dev/nvme0n4 00:35:10.498 Could not set queue depth (nvme0n1) 00:35:10.498 Could not set queue depth (nvme0n2) 00:35:10.498 Could not set queue depth (nvme0n3) 00:35:10.498 Could not set queue depth (nvme0n4) 00:35:10.759 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:10.759 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:10.759 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:10.759 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:10.759 fio-3.35 00:35:10.759 Starting 4 threads 00:35:12.144 00:35:12.144 job0: (groupid=0, jobs=1): err= 0: pid=1348198: Fri Nov 15 11:58:37 2024 00:35:12.144 read: IOPS=16, BW=67.2KiB/s (68.8kB/s)(68.0KiB/1012msec) 00:35:12.144 slat (nsec): min=27758, max=29103, avg=28105.59, stdev=299.95 00:35:12.144 clat (usec): min=758, max=42042, avg=39299.42, stdev=9940.00 00:35:12.144 lat (usec): min=787, max=42070, avg=39327.52, stdev=9939.74 00:35:12.144 clat percentiles (usec): 00:35:12.144 | 1.00th=[ 758], 5.00th=[ 758], 10.00th=[41157], 20.00th=[41157], 00:35:12.144 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:35:12.144 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:12.144 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:12.144 | 99.99th=[42206] 00:35:12.144 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:35:12.144 slat (nsec): min=9548, max=56626, avg=33046.14, stdev=9488.12 00:35:12.144 clat (usec): min=253, max=1810, avg=629.66, stdev=153.34 00:35:12.144 lat (usec): min=262, max=1852, avg=662.70, stdev=156.74 00:35:12.144 clat percentiles (usec): 00:35:12.144 | 1.00th=[ 297], 5.00th=[ 392], 10.00th=[ 433], 20.00th=[ 494], 00:35:12.144 | 30.00th=[ 545], 40.00th=[ 594], 50.00th=[ 635], 60.00th=[ 676], 00:35:12.144 | 70.00th=[ 717], 80.00th=[ 742], 90.00th=[ 807], 95.00th=[ 857], 00:35:12.144 | 
99.00th=[ 996], 99.50th=[ 1045], 99.90th=[ 1811], 99.95th=[ 1811], 00:35:12.144 | 99.99th=[ 1811] 00:35:12.144 bw ( KiB/s): min= 4087, max= 4087, per=51.34%, avg=4087.00, stdev= 0.00, samples=1 00:35:12.144 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:35:12.144 lat (usec) : 500=20.79%, 750=57.84%, 1000=17.39% 00:35:12.144 lat (msec) : 2=0.95%, 50=3.02% 00:35:12.144 cpu : usr=1.19%, sys=1.98%, ctx=532, majf=0, minf=1 00:35:12.144 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:12.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:12.144 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:12.144 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:12.144 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:12.144 job1: (groupid=0, jobs=1): err= 0: pid=1348209: Fri Nov 15 11:58:37 2024 00:35:12.144 read: IOPS=16, BW=66.1KiB/s (67.7kB/s)(68.0KiB/1029msec) 00:35:12.144 slat (nsec): min=26088, max=27040, avg=26546.94, stdev=198.45 00:35:12.144 clat (usec): min=40900, max=42031, avg=41612.30, stdev=470.76 00:35:12.144 lat (usec): min=40927, max=42058, avg=41638.84, stdev=470.69 00:35:12.144 clat percentiles (usec): 00:35:12.144 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:12.144 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:35:12.144 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:12.144 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:12.144 | 99.99th=[42206] 00:35:12.144 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:35:12.144 slat (nsec): min=9346, max=68658, avg=29398.41, stdev=9229.15 00:35:12.144 clat (usec): min=235, max=1024, avg=590.18, stdev=156.69 00:35:12.144 lat (usec): min=268, max=1058, avg=619.58, stdev=160.53 00:35:12.144 clat percentiles (usec): 00:35:12.144 | 1.00th=[ 269], 5.00th=[ 334], 10.00th=[ 367], 20.00th=[ 445], 00:35:12.144 | 30.00th=[ 494], 40.00th=[ 545], 50.00th=[ 603], 60.00th=[ 652], 00:35:12.144 | 70.00th=[ 693], 80.00th=[ 725], 90.00th=[ 783], 95.00th=[ 832], 00:35:12.144 | 99.00th=[ 938], 99.50th=[ 1004], 99.90th=[ 1029], 99.95th=[ 1029], 00:35:12.144 | 99.99th=[ 1029] 00:35:12.144 bw ( KiB/s): min= 4096, max= 4096, per=51.45%, avg=4096.00, stdev= 0.00, samples=1 00:35:12.144 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:12.144 lat (usec) : 250=0.19%, 500=30.43%, 750=51.61%, 1000=13.99% 00:35:12.144 lat (msec) : 2=0.57%, 50=3.21% 00:35:12.144 cpu : usr=0.68%, sys=1.85%, ctx=530, majf=0, minf=2 00:35:12.144 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:12.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:12.144 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:12.144 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:12.144 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:12.144 job2: (groupid=0, jobs=1): err= 0: pid=1348233: Fri Nov 15 11:58:37 2024 00:35:12.144 read: IOPS=16, BW=67.3KiB/s (68.9kB/s)(68.0KiB/1011msec) 00:35:12.144 slat (nsec): min=27314, max=28126, avg=27544.41, stdev=216.40 00:35:12.144 clat (usec): min=40924, max=41996, avg=41615.60, stdev=471.61 00:35:12.144 lat (usec): min=40952, max=42023, avg=41643.14, stdev=471.63 00:35:12.144 clat percentiles (usec): 00:35:12.144 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 
20.00th=[41157], 00:35:12.145 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:35:12.145 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:12.145 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:12.145 | 99.99th=[42206] 00:35:12.145 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:35:12.145 slat (nsec): min=9779, max=56652, avg=27163.62, stdev=12300.36 00:35:12.145 clat (usec): min=187, max=999, avg=556.02, stdev=177.38 00:35:12.145 lat (usec): min=208, max=1034, avg=583.19, stdev=185.24 00:35:12.145 clat percentiles (usec): 00:35:12.145 | 1.00th=[ 253], 5.00th=[ 285], 10.00th=[ 322], 20.00th=[ 359], 00:35:12.145 | 30.00th=[ 437], 40.00th=[ 502], 50.00th=[ 570], 60.00th=[ 619], 00:35:12.145 | 70.00th=[ 676], 80.00th=[ 717], 90.00th=[ 783], 95.00th=[ 832], 00:35:12.145 | 99.00th=[ 914], 99.50th=[ 947], 99.90th=[ 996], 99.95th=[ 996], 00:35:12.145 | 99.99th=[ 996] 00:35:12.145 bw ( KiB/s): min= 4096, max= 4096, per=51.45%, avg=4096.00, stdev= 0.00, samples=1 00:35:12.145 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:12.145 lat (usec) : 250=0.95%, 500=37.43%, 750=43.48%, 1000=14.93% 00:35:12.145 lat (msec) : 50=3.21% 00:35:12.145 cpu : usr=1.39%, sys=1.09%, ctx=530, majf=0, minf=1 00:35:12.145 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:12.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:12.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:12.145 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:12.145 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:12.145 job3: (groupid=0, jobs=1): err= 0: pid=1348239: Fri Nov 15 11:58:37 2024 00:35:12.145 read: IOPS=17, BW=71.5KiB/s (73.2kB/s)(72.0KiB/1007msec) 00:35:12.145 slat (nsec): min=10495, max=27735, avg=26573.56, stdev=4017.49 00:35:12.145 clat (usec): min=40914, max=41813, avg=41041.77, stdev=225.24 00:35:12.145 lat (usec): min=40941, max=41840, avg=41068.35, stdev=223.53 00:35:12.145 clat percentiles (usec): 00:35:12.145 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:12.145 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:12.145 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:35:12.145 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:35:12.145 | 99.99th=[41681] 00:35:12.145 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:35:12.145 slat (nsec): min=9380, max=73720, avg=32210.85, stdev=10159.38 00:35:12.145 clat (usec): min=171, max=1601, avg=481.02, stdev=134.17 00:35:12.145 lat (usec): min=183, max=1644, avg=513.23, stdev=137.68 00:35:12.145 clat percentiles (usec): 00:35:12.145 | 1.00th=[ 229], 5.00th=[ 273], 10.00th=[ 314], 20.00th=[ 371], 00:35:12.145 | 30.00th=[ 412], 40.00th=[ 449], 50.00th=[ 478], 60.00th=[ 510], 00:35:12.145 | 70.00th=[ 537], 80.00th=[ 586], 90.00th=[ 652], 95.00th=[ 701], 00:35:12.145 | 99.00th=[ 766], 99.50th=[ 775], 99.90th=[ 1598], 99.95th=[ 1598], 00:35:12.145 | 99.99th=[ 1598] 00:35:12.145 bw ( KiB/s): min= 4096, max= 4096, per=51.45%, avg=4096.00, stdev= 0.00, samples=1 00:35:12.145 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:12.145 lat (usec) : 250=2.45%, 500=53.02%, 750=39.06%, 1000=1.89% 00:35:12.145 lat (msec) : 2=0.19%, 50=3.40% 00:35:12.145 cpu : usr=1.19%, sys=1.79%, ctx=532, majf=0, minf=1 
00:35:12.145 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:12.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:12.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:12.145 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:12.145 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:12.145 00:35:12.145 Run status group 0 (all jobs): 00:35:12.145 READ: bw=268KiB/s (275kB/s), 66.1KiB/s-71.5KiB/s (67.7kB/s-73.2kB/s), io=276KiB (283kB), run=1007-1029msec 00:35:12.145 WRITE: bw=7961KiB/s (8152kB/s), 1990KiB/s-2034KiB/s (2038kB/s-2083kB/s), io=8192KiB (8389kB), run=1007-1029msec 00:35:12.145 00:35:12.145 Disk stats (read/write): 00:35:12.145 nvme0n1: ios=64/512, merge=0/0, ticks=1137/260, in_queue=1397, util=96.49% 00:35:12.145 nvme0n2: ios=48/512, merge=0/0, ticks=621/258, in_queue=879, util=91.11% 00:35:12.145 nvme0n3: ios=35/512, merge=0/0, ticks=1460/236, in_queue=1696, util=97.14% 00:35:12.145 nvme0n4: ios=70/512, merge=0/0, ticks=1074/189, in_queue=1263, util=97.11% 00:35:12.145 11:58:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:35:12.145 [global] 00:35:12.145 thread=1 00:35:12.145 invalidate=1 00:35:12.145 rw=randwrite 00:35:12.145 time_based=1 00:35:12.145 runtime=1 00:35:12.145 ioengine=libaio 00:35:12.145 direct=1 00:35:12.145 bs=4096 00:35:12.145 iodepth=1 00:35:12.145 norandommap=0 00:35:12.145 numjobs=1 00:35:12.145 00:35:12.145 verify_dump=1 00:35:12.145 verify_backlog=512 00:35:12.145 verify_state_save=0 00:35:12.145 do_verify=1 00:35:12.145 verify=crc32c-intel 00:35:12.145 [job0] 00:35:12.145 filename=/dev/nvme0n1 00:35:12.145 [job1] 00:35:12.145 filename=/dev/nvme0n2 00:35:12.145 [job2] 00:35:12.145 filename=/dev/nvme0n3 00:35:12.145 [job3] 00:35:12.145 filename=/dev/nvme0n4 00:35:12.145 Could not set queue depth (nvme0n1) 00:35:12.145 Could not set queue depth (nvme0n2) 00:35:12.145 Could not set queue depth (nvme0n3) 00:35:12.145 Could not set queue depth (nvme0n4) 00:35:12.406 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:12.406 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:12.406 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:12.406 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:35:12.406 fio-3.35 00:35:12.406 Starting 4 threads 00:35:13.786 00:35:13.786 job0: (groupid=0, jobs=1): err= 0: pid=1348647: Fri Nov 15 11:58:39 2024 00:35:13.786 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:35:13.786 slat (nsec): min=7673, max=47151, avg=26682.21, stdev=3035.87 00:35:13.786 clat (usec): min=560, max=1321, avg=1002.59, stdev=104.69 00:35:13.786 lat (usec): min=587, max=1348, avg=1029.28, stdev=104.91 00:35:13.786 clat percentiles (usec): 00:35:13.786 | 1.00th=[ 709], 5.00th=[ 824], 10.00th=[ 881], 20.00th=[ 930], 00:35:13.786 | 30.00th=[ 963], 40.00th=[ 988], 50.00th=[ 1012], 60.00th=[ 1029], 00:35:13.786 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1123], 95.00th=[ 1172], 00:35:13.786 | 99.00th=[ 1221], 99.50th=[ 1270], 99.90th=[ 1319], 99.95th=[ 1319], 00:35:13.786 | 99.99th=[ 1319] 00:35:13.786 write: IOPS=705, BW=2821KiB/s 
(2889kB/s)(2824KiB/1001msec); 0 zone resets 00:35:13.786 slat (nsec): min=9617, max=59884, avg=31129.68, stdev=8612.82 00:35:13.786 clat (usec): min=175, max=1472, avg=624.56, stdev=146.57 00:35:13.786 lat (usec): min=187, max=1506, avg=655.69, stdev=149.24 00:35:13.786 clat percentiles (usec): 00:35:13.786 | 1.00th=[ 277], 5.00th=[ 371], 10.00th=[ 429], 20.00th=[ 494], 00:35:13.786 | 30.00th=[ 562], 40.00th=[ 611], 50.00th=[ 635], 60.00th=[ 668], 00:35:13.786 | 70.00th=[ 701], 80.00th=[ 734], 90.00th=[ 799], 95.00th=[ 857], 00:35:13.786 | 99.00th=[ 922], 99.50th=[ 988], 99.90th=[ 1467], 99.95th=[ 1467], 00:35:13.786 | 99.99th=[ 1467] 00:35:13.786 bw ( KiB/s): min= 4096, max= 4096, per=42.66%, avg=4096.00, stdev= 0.00, samples=1 00:35:13.786 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:13.786 lat (usec) : 250=0.25%, 500=11.74%, 750=36.78%, 1000=28.98% 00:35:13.786 lat (msec) : 2=22.25% 00:35:13.786 cpu : usr=1.60%, sys=4.00%, ctx=1223, majf=0, minf=1 00:35:13.786 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:13.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.786 issued rwts: total=512,706,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:13.786 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:13.786 job1: (groupid=0, jobs=1): err= 0: pid=1348664: Fri Nov 15 11:58:39 2024 00:35:13.786 read: IOPS=15, BW=62.4KiB/s (63.9kB/s)(64.0KiB/1025msec) 00:35:13.786 slat (nsec): min=26355, max=27825, avg=26887.31, stdev=378.56 00:35:13.786 clat (usec): min=40893, max=42156, avg=41678.87, stdev=427.66 00:35:13.786 lat (usec): min=40920, max=42183, avg=41705.76, stdev=427.48 00:35:13.786 clat percentiles (usec): 00:35:13.786 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:35:13.786 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:35:13.786 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:13.786 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:13.786 | 99.99th=[42206] 00:35:13.786 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:35:13.786 slat (nsec): min=10023, max=66331, avg=32887.84, stdev=6712.95 00:35:13.786 clat (usec): min=203, max=1104, avg=656.80, stdev=164.51 00:35:13.786 lat (usec): min=215, max=1115, avg=689.69, stdev=165.77 00:35:13.786 clat percentiles (usec): 00:35:13.786 | 1.00th=[ 273], 5.00th=[ 334], 10.00th=[ 441], 20.00th=[ 519], 00:35:13.786 | 30.00th=[ 586], 40.00th=[ 627], 50.00th=[ 668], 60.00th=[ 709], 00:35:13.786 | 70.00th=[ 750], 80.00th=[ 791], 90.00th=[ 865], 95.00th=[ 906], 00:35:13.786 | 99.00th=[ 971], 99.50th=[ 1012], 99.90th=[ 1106], 99.95th=[ 1106], 00:35:13.786 | 99.99th=[ 1106] 00:35:13.786 bw ( KiB/s): min= 4096, max= 4096, per=42.66%, avg=4096.00, stdev= 0.00, samples=1 00:35:13.786 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:13.786 lat (usec) : 250=0.57%, 500=15.72%, 750=53.03%, 1000=26.89% 00:35:13.786 lat (msec) : 2=0.76%, 50=3.03% 00:35:13.786 cpu : usr=0.88%, sys=1.56%, ctx=530, majf=0, minf=1 00:35:13.786 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:13.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.786 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:35:13.786 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:13.786 job2: (groupid=0, jobs=1): err= 0: pid=1348683: Fri Nov 15 11:58:39 2024 00:35:13.786 read: IOPS=16, BW=65.9KiB/s (67.5kB/s)(68.0KiB/1032msec) 00:35:13.786 slat (nsec): min=26046, max=26443, avg=26231.47, stdev=99.09 00:35:13.786 clat (usec): min=1169, max=42240, avg=39481.53, stdev=9877.74 00:35:13.786 lat (usec): min=1195, max=42266, avg=39507.76, stdev=9877.79 00:35:13.786 clat percentiles (usec): 00:35:13.786 | 1.00th=[ 1172], 5.00th=[ 1172], 10.00th=[40633], 20.00th=[41681], 00:35:13.786 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:35:13.786 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:13.786 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:13.786 | 99.99th=[42206] 00:35:13.786 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:35:13.786 slat (nsec): min=9932, max=65298, avg=32876.19, stdev=7632.33 00:35:13.786 clat (usec): min=231, max=1049, avg=662.04, stdev=145.92 00:35:13.786 lat (usec): min=269, max=1084, avg=694.91, stdev=147.32 00:35:13.786 clat percentiles (usec): 00:35:13.786 | 1.00th=[ 306], 5.00th=[ 400], 10.00th=[ 457], 20.00th=[ 545], 00:35:13.786 | 30.00th=[ 594], 40.00th=[ 635], 50.00th=[ 676], 60.00th=[ 717], 00:35:13.786 | 70.00th=[ 742], 80.00th=[ 783], 90.00th=[ 840], 95.00th=[ 881], 00:35:13.786 | 99.00th=[ 947], 99.50th=[ 979], 99.90th=[ 1057], 99.95th=[ 1057], 00:35:13.786 | 99.99th=[ 1057] 00:35:13.786 bw ( KiB/s): min= 4096, max= 4096, per=42.66%, avg=4096.00, stdev= 0.00, samples=1 00:35:13.786 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:35:13.786 lat (usec) : 250=0.19%, 500=13.99%, 750=54.82%, 1000=27.60% 00:35:13.786 lat (msec) : 2=0.38%, 50=3.02% 00:35:13.786 cpu : usr=0.78%, sys=1.65%, ctx=530, majf=0, minf=1 00:35:13.786 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:13.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.786 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:13.786 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:13.786 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:13.786 job3: (groupid=0, jobs=1): err= 0: pid=1348689: Fri Nov 15 11:58:39 2024 00:35:13.786 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:35:13.786 slat (nsec): min=6834, max=63355, avg=27735.25, stdev=3495.59 00:35:13.786 clat (usec): min=456, max=1266, avg=985.15, stdev=86.98 00:35:13.786 lat (usec): min=483, max=1312, avg=1012.88, stdev=87.12 00:35:13.786 clat percentiles (usec): 00:35:13.786 | 1.00th=[ 685], 5.00th=[ 840], 10.00th=[ 889], 20.00th=[ 930], 00:35:13.786 | 30.00th=[ 963], 40.00th=[ 979], 50.00th=[ 996], 60.00th=[ 1012], 00:35:13.786 | 70.00th=[ 1020], 80.00th=[ 1045], 90.00th=[ 1074], 95.00th=[ 1106], 00:35:13.786 | 99.00th=[ 1172], 99.50th=[ 1221], 99.90th=[ 1270], 99.95th=[ 1270], 00:35:13.786 | 99.99th=[ 1270] 00:35:13.786 write: IOPS=746, BW=2985KiB/s (3057kB/s)(2988KiB/1001msec); 0 zone resets 00:35:13.786 slat (nsec): min=8983, max=73349, avg=30213.36, stdev=9085.65 00:35:13.786 clat (usec): min=216, max=1043, avg=601.26, stdev=135.16 00:35:13.786 lat (usec): min=250, max=1077, avg=631.48, stdev=138.59 00:35:13.786 clat percentiles (usec): 00:35:13.786 | 1.00th=[ 269], 5.00th=[ 363], 10.00th=[ 420], 20.00th=[ 494], 00:35:13.786 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[ 611], 60.00th=[ 644], 
00:35:13.787 11:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v
00:35:13.787 [global]
00:35:13.787 thread=1
00:35:13.787 invalidate=1
00:35:13.787 rw=write
00:35:13.787 time_based=1
00:35:13.787 runtime=1
00:35:13.787 ioengine=libaio
00:35:13.787 direct=1
00:35:13.787 bs=4096
00:35:13.787 iodepth=128
00:35:13.787 norandommap=0
00:35:13.787 numjobs=1
00:35:13.787
00:35:13.787 verify_dump=1
00:35:13.787 verify_backlog=512
00:35:13.787 verify_state_save=0
00:35:13.787 do_verify=1
00:35:13.787 verify=crc32c-intel
00:35:13.787 [job0]
00:35:13.787 filename=/dev/nvme0n1
00:35:13.787 [job1]
00:35:13.787 filename=/dev/nvme0n2
00:35:13.787 [job2]
00:35:13.787 filename=/dev/nvme0n3
00:35:13.787 [job3]
00:35:13.787 filename=/dev/nvme0n4
00:35:13.787 Could not set queue depth (nvme0n1)
00:35:13.787 Could not set queue depth (nvme0n2)
00:35:13.787 Could not set queue depth (nvme0n3)
00:35:13.787 Could not set queue depth (nvme0n4)
00:35:14.045 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:35:14.045 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:35:14.045 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:35:14.045 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:35:14.045 fio-3.35
00:35:14.045 Starting 4 threads
00:35:15.429
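The [global]/[jobN] block printed by fio-wrapper above is an ordinary fio job file: one shared section carrying the I/O pattern (rw=write, bs=4096, iodepth=128, libaio with direct=1, crc32c-intel verification) plus one [jobN] section per connected namespace. Outside the harness, the same workload could be reproduced by saving that block and handing it to fio directly; a minimal sketch (the file name is hypothetical, options copied from the listing above):

  cat > /tmp/nvmf-write.fio <<'EOF'
  [global]
  ioengine=libaio
  direct=1
  rw=write
  bs=4096
  iodepth=128
  time_based=1
  runtime=1
  do_verify=1
  verify=crc32c-intel
  [job0]
  filename=/dev/nvme0n1
  EOF
  fio /tmp/nvmf-write.fio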
00:35:15.429 job0: (groupid=0, jobs=1): err= 0: pid=1349143: Fri Nov 15 11:58:40 2024
00:35:15.429 read: IOPS=4374, BW=17.1MiB/s (17.9MB/s)(17.2MiB/1006msec)
00:35:15.429 slat (nsec): min=1153, max=19726k, avg=118447.64, stdev=978198.48
00:35:15.429 clat (usec): min=1611, max=49602, avg=16083.29, stdev=8613.98
00:35:15.429 lat (usec): min=3419, max=49613, avg=16201.74, stdev=8689.03
00:35:15.429 clat percentiles (usec):
00:35:15.429 | 1.00th=[ 5800], 5.00th=[ 7504], 10.00th=[ 8029], 20.00th=[ 9110],
00:35:15.429 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[12518], 60.00th=[16581],
00:35:15.429 | 70.00th=[20579], 80.00th=[23725], 90.00th=[28967], 95.00th=[30802],
00:35:15.429 | 99.00th=[39584], 99.50th=[46924], 99.90th=[49546], 99.95th=[49546],
00:35:15.429 | 99.99th=[49546]
00:35:15.429 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets
00:35:15.429 slat (nsec): min=1667, max=15025k, avg=97895.29, stdev=756193.33
00:35:15.429 clat (usec): min=1259, max=40976, avg=12267.03, stdev=6360.86
00:35:15.429 lat (usec): min=1269, max=41001, avg=12364.92, stdev=6433.23
00:35:15.429 clat percentiles (usec):
00:35:15.429 | 1.00th=[ 3818], 5.00th=[ 6718], 10.00th=[ 8291], 20.00th=[ 8586],
00:35:15.429 | 30.00th=[ 8848], 40.00th=[ 8979], 50.00th=[ 9110], 60.00th=[ 9503],
00:35:15.429 | 70.00th=[10552], 80.00th=[17695], 90.00th=[23200], 95.00th=[26084],
00:35:15.429 | 99.00th=[30016], 99.50th=[33817], 99.90th=[34341], 99.95th=[36963],
00:35:15.429 | 99.99th=[41157]
00:35:15.429 bw ( KiB/s): min=17672, max=19192, per=17.73%, avg=18432.00, stdev=1074.80, samples=2
00:35:15.429 iops : min= 4418, max= 4798, avg=4608.00, stdev=268.70, samples=2
00:35:15.429 lat (msec) : 2=0.11%, 4=0.54%, 10=52.40%, 20=21.79%, 50=25.15%
00:35:15.429 cpu : usr=3.18%, sys=5.67%, ctx=249, majf=0, minf=2
00:35:15.429 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3%
00:35:15.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:15.429 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:35:15.429 issued rwts: total=4401,4608,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:15.429 latency : target=0, window=0, percentile=100.00%, depth=128
00:35:15.429 job1: (groupid=0, jobs=1): err= 0: pid=1349150: Fri Nov 15 11:58:40 2024
00:35:15.429 read: IOPS=8660, BW=33.8MiB/s (35.5MB/s)(34.0MiB/1005msec)
00:35:15.429 slat (nsec): min=943, max=6591.4k, avg=58440.21, stdev=464246.31
00:35:15.429 clat (usec): min=2559, max=15354, avg=7667.92, stdev=1863.63
00:35:15.429 lat (usec): min=2566, max=17632, avg=7726.36, stdev=1897.21
00:35:15.429 clat percentiles (usec):
00:35:15.429 | 1.00th=[ 4555], 5.00th=[ 5538], 10.00th=[ 5866], 20.00th=[ 6325],
00:35:15.429 | 30.00th=[ 6652], 40.00th=[ 6783], 50.00th=[ 6980], 60.00th=[ 7373],
00:35:15.429 | 70.00th=[ 8160], 80.00th=[ 9241], 90.00th=[10683], 95.00th=[11469],
00:35:15.429 | 99.00th=[12649], 99.50th=[13042], 99.90th=[13829], 99.95th=[13829],
00:35:15.429 | 99.99th=[15401]
00:35:15.429 write: IOPS=8749, BW=34.2MiB/s (35.8MB/s)(34.3MiB/1005msec); 0 zone resets
00:35:15.429 slat (nsec): min=1631, max=6810.8k, avg=51061.45, stdev=372768.48
00:35:15.429 clat (usec): min=1146, max=18454, avg=6908.12, stdev=2124.20
00:35:15.429 lat (usec): min=1155, max=18456, avg=6959.18, stdev=2138.55
00:35:15.429 clat percentiles (usec):
00:35:15.429 | 1.00th=[ 3294], 5.00th=[ 4359], 10.00th=[ 4686], 20.00th=[ 5342],
00:35:15.429 | 30.00th=[ 5997], 40.00th=[ 6521], 50.00th=[ 6849], 60.00th=[ 7046],
00:35:15.429 | 70.00th=[ 7242], 80.00th=[ 7701], 90.00th=[ 9241], 95.00th=[10683],
00:35:15.429 | 99.00th=[17957], 99.50th=[18220], 99.90th=[18482], 99.95th=[18482],
00:35:15.429 | 99.99th=[18482]
00:35:15.429 bw ( KiB/s): min=34104, max=35528, per=33.49%, avg=34816.00, stdev=1006.92, samples=2
00:35:15.429 iops : min= 8526, max= 8882, avg=8704.00, stdev=251.73, samples=2
00:35:15.429 lat (msec) : 2=0.09%, 4=1.53%, 10=87.23%, 20=11.15%
00:35:15.429 cpu : usr=5.18%, sys=8.57%, ctx=568, majf=0, minf=1
00:35:15.429 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
00:35:15.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:15.429 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:35:15.429 issued rwts: total=8704,8793,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:15.429 latency : target=0, window=0, percentile=100.00%, depth=128
00:35:15.429 job2: (groupid=0, jobs=1): err= 0: pid=1349156: Fri Nov 15 11:58:40 2024
00:35:15.429 read: IOPS=7204, BW=28.1MiB/s (29.5MB/s)(28.4MiB/1008msec)
00:35:15.429 slat (nsec): min=976, max=15148k, avg=66889.47, stdev=565432.15
00:35:15.429 clat (usec): min=3931, max=23402, avg=9283.07, stdev=2825.06
00:35:15.429 lat (usec): min=3937, max=23410, avg=9349.96, stdev=2858.89
00:35:15.429 clat percentiles (usec):
00:35:15.429 | 1.00th=[ 5080], 5.00th=[ 6587], 10.00th=[ 7046], 20.00th=[ 7439],
00:35:15.429 | 30.00th=[ 7767], 40.00th=[ 8029], 50.00th=[ 8225], 60.00th=[ 8586],
00:35:15.429 | 70.00th=[ 9896], 80.00th=[11076], 90.00th=[13173], 95.00th=[14353],
00:35:15.429 | 99.00th=[21103], 99.50th=[22414], 99.90th=[23462], 99.95th=[23462],
00:35:15.429 | 99.99th=[23462]
00:35:15.429 write: IOPS=7619, BW=29.8MiB/s (31.2MB/s)(30.0MiB/1008msec); 0 zone resets
00:35:15.429 slat (nsec): min=1649, max=8190.2k, avg=62086.29, stdev=472867.92
00:35:15.429 clat (usec): min=1143, max=16599, avg=7872.98, stdev=2063.47
00:35:15.429 lat (usec): min=1155, max=16629, avg=7935.06, stdev=2084.10
00:35:15.429 clat percentiles (usec):
00:35:15.429 | 1.00th=[ 4047], 5.00th=[ 4883], 10.00th=[ 5211], 20.00th=[ 6128],
00:35:15.429 | 30.00th=[ 6783], 40.00th=[ 7570], 50.00th=[ 7963], 60.00th=[ 8160],
00:35:15.429 | 70.00th=[ 8455], 80.00th=[ 9110], 90.00th=[10290], 95.00th=[12649],
00:35:15.429 | 99.00th=[13435], 99.50th=[14222], 99.90th=[15139], 99.95th=[16450],
00:35:15.429 | 99.99th=[16581]
00:35:15.429 bw ( KiB/s): min=28672, max=32504, per=29.42%, avg=30588.00, stdev=2709.63, samples=2
00:35:15.429 iops : min= 7168, max= 8126, avg=7647.00, stdev=677.41, samples=2
00:35:15.429 lat (msec) : 2=0.05%, 4=0.49%, 10=78.73%, 20=20.05%, 50=0.68%
00:35:15.429 cpu : usr=4.87%, sys=7.55%, ctx=477, majf=0, minf=1
00:35:15.429 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
00:35:15.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:15.429 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:35:15.429 issued rwts: total=7262,7680,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:15.429 latency : target=0, window=0, percentile=100.00%, depth=128
00:35:15.429 job3: (groupid=0, jobs=1): err= 0: pid=1349162: Fri Nov 15 11:58:40 2024
00:35:15.429 read: IOPS=4931, BW=19.3MiB/s (20.2MB/s)(19.4MiB/1005msec)
00:35:15.429 slat (nsec): min=1075, max=10841k, avg=89432.00, stdev=691395.61
00:35:15.429 clat (usec): min=2888, max=40585, avg=11973.54, stdev=4511.21
00:35:15.429 lat (usec): min=2915, max=40593, avg=12062.97, stdev=4566.81
00:35:15.429 clat percentiles (usec):
00:35:15.429 | 1.00th=[ 3982], 5.00th=[ 7177], 10.00th=[ 7767], 20.00th=[ 8455],
00:35:15.429 | 30.00th=[10159], 40.00th=[10814], 50.00th=[11469], 60.00th=[11863],
00:35:15.429 | 70.00th=[12518], 80.00th=[14091], 90.00th=[16450], 95.00th=[19006],
00:35:15.429 | 99.00th=[32113], 99.50th=[36439], 99.90th=[40633], 99.95th=[40633],
00:35:15.429 | 99.99th=[40633]
00:35:15.429 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets
00:35:15.429 slat (nsec): min=1647, max=9427.1k, avg=93155.44, stdev=596322.43
00:35:15.430 clat (usec): min=1226, max=40561, avg=13280.50, stdev=8370.70
00:35:15.430 lat (usec): min=1237, max=40565, avg=13373.66, stdev=8430.93
00:35:15.430 clat percentiles (usec):
00:35:15.430 | 1.00th=[ 4424], 5.00th=[ 5211], 10.00th=[ 6063], 20.00th=[ 7898],
00:35:15.430 | 30.00th=[ 8848], 40.00th=[ 9503], 50.00th=[10290], 60.00th=[10945],
00:35:15.430 | 70.00th=[12387], 80.00th=[16909], 90.00th=[29492], 95.00th=[32375],
00:35:15.430 | 99.00th=[36439], 99.50th=[36439], 99.90th=[37487], 99.95th=[37487],
00:35:15.430 | 99.99th=[40633]
00:35:15.430 bw ( KiB/s): min=16384, max=24576, per=19.70%, avg=20480.00, stdev=5792.62, samples=2
00:35:15.430 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2
00:35:15.430 lat (msec) : 2=0.07%, 4=0.79%, 10=36.00%, 20=52.41%, 50=10.73%
00:35:15.430 cpu : usr=3.78%, sys=5.78%, ctx=374, majf=0, minf=1
00:35:15.430 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4%
00:35:15.430 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:15.430 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:35:15.430 issued rwts: total=4956,5120,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:15.430 latency : target=0, window=0, percentile=100.00%, depth=128
00:35:15.430
00:35:15.430 Run status group 0 (all jobs):
00:35:15.430 READ: bw=98.1MiB/s (103MB/s), 17.1MiB/s-33.8MiB/s (17.9MB/s-35.5MB/s), io=98.9MiB (104MB), run=1005-1008msec
00:35:15.430 WRITE: bw=102MiB/s (106MB/s), 17.9MiB/s-34.2MiB/s (18.8MB/s-35.8MB/s), io=102MiB (107MB), run=1005-1008msec
00:35:15.430
00:35:15.430 Disk stats (read/write):
00:35:15.430 nvme0n1: ios=3548/3584, merge=0/0, ticks=29919/25477, in_queue=55396, util=96.49%
00:35:15.430 nvme0n2: ios=7182/7273, merge=0/0, ticks=52236/48707, in_queue=100943, util=87.36%
00:35:15.430 nvme0n3: ios=6035/6144, merge=0/0, ticks=54466/46893, in_queue=101359, util=88.40%
00:35:15.430 nvme0n4: ios=4471/4608, merge=0/0, ticks=49118/51277, in_queue=100395, util=100.00%
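In these tables, slat is submission latency (the time to hand an I/O to the kernel), clat is completion latency (submission to completion), and lat is their sum; the bracketed percentile rows break clat down across the distribution. To pull one figure out of a capture like this, a small sketch (fio.log is a hypothetical saved copy of the plain fio output, without the Jenkins timestamps):

  # print each job's average completion latency
  awk '/^job[0-9]+:/ { job = $1 }
       /clat \((usec|nsec|msec)\)/ { for (i = 1; i <= NF; i++)
           if ($i ~ /^avg=/) { gsub(/,/, "", $i); print job, $i } }' fio.log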
00:35:15.430 11:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v
00:35:15.430 [global]
00:35:15.430 thread=1
00:35:15.430 invalidate=1
00:35:15.430 rw=randwrite
00:35:15.430 time_based=1
00:35:15.430 runtime=1
00:35:15.430 ioengine=libaio
00:35:15.430 direct=1
00:35:15.430 bs=4096
00:35:15.430 iodepth=128
00:35:15.430 norandommap=0
00:35:15.430 numjobs=1
00:35:15.430
00:35:15.430 verify_dump=1
00:35:15.430 verify_backlog=512
00:35:15.430 verify_state_save=0
00:35:15.430 do_verify=1
00:35:15.430 verify=crc32c-intel
00:35:15.430 [job0]
00:35:15.430 filename=/dev/nvme0n1
00:35:15.430 [job1]
00:35:15.430 filename=/dev/nvme0n2
00:35:15.430 [job2]
00:35:15.430 filename=/dev/nvme0n3
00:35:15.430 [job3]
00:35:15.430 filename=/dev/nvme0n4
00:35:15.430 Could not set queue depth (nvme0n1)
00:35:15.430 Could not set queue depth (nvme0n2)
00:35:15.430 Could not set queue depth (nvme0n3)
00:35:15.430 Could not set queue depth (nvme0n4)
00:35:16.006 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:35:16.006 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:35:16.006 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:35:16.006 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:35:16.006 fio-3.35
00:35:16.006 Starting 4 threads
00:35:17.392
00:35:17.392 job0: (groupid=0, jobs=1): err= 0: pid=1349647: Fri Nov 15 11:58:42 2024
00:35:17.392 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec)
00:35:17.392 slat (nsec): min=903, max=16570k, avg=174946.44, stdev=1146738.66
00:35:17.392 clat (usec): min=5249, max=52655, avg=21338.39, stdev=10836.33
00:35:17.392 lat (usec): min=5255, max=52664, avg=21513.34, stdev=10893.74
00:35:17.392 clat percentiles (usec):
00:35:17.392 | 1.00th=[ 6849], 5.00th=[ 8094], 10.00th=[11731], 20.00th=[13566],
00:35:17.392 | 30.00th=[14615], 40.00th=[15795], 50.00th=[17957], 60.00th=[20317],
00:35:17.392 | 70.00th=[22414], 80.00th=[26346], 90.00th=[40633], 95.00th=[45351],
00:35:17.392 | 99.00th=[52691], 99.50th=[52691], 99.90th=[52691], 99.95th=[52691],
00:35:17.392 | 99.99th=[52691]
00:35:17.392 write: IOPS=3530, BW=13.8MiB/s (14.5MB/s)(13.8MiB/1003msec); 0 zone resets
00:35:17.392 slat (nsec): min=1560, max=18736k, avg=124798.40, stdev=808770.85
00:35:17.392 clat (usec): min=2826, max=44382, avg=17350.11, stdev=9837.50
00:35:17.392 lat (usec): min=4517, max=54085, avg=17474.91, stdev=9886.23
00:35:17.392 clat percentiles (usec):
00:35:17.392 | 1.00th=[ 5538], 5.00th=[ 7963], 10.00th=[ 9110], 20.00th=[ 9896],
00:35:17.392 | 30.00th=[10421], 40.00th=[11338], 50.00th=[12256], 60.00th=[15139],
00:35:17.392 | 70.00th=[20841], 80.00th=[28181], 90.00th=[33424], 95.00th=[37487],
00:35:17.392 | 99.00th=[43254], 99.50th=[43254], 99.90th=[44303], 99.95th=[44303],
00:35:17.392 | 99.99th=[44303]
00:35:17.392 bw ( KiB/s): min=12288, max=15024, per=15.43%, avg=13656.00, stdev=1934.64, samples=2
00:35:17.392 iops : min= 3072, max= 3756, avg=3414.00, stdev=483.66, samples=2
00:35:17.392 lat (msec) : 4=0.02%, 10=16.57%, 20=47.97%, 50=34.98%, 100=0.47%
00:35:17.392 cpu : usr=2.30%, sys=4.19%, ctx=269, majf=0, minf=1
00:35:17.392 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0%
00:35:17.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:17.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:35:17.392 issued rwts: total=3072,3541,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:17.392 latency : target=0, window=0, percentile=100.00%, depth=128
00:35:17.392 job1: (groupid=0, jobs=1): err= 0: pid=1349648: Fri Nov 15 11:58:42 2024
00:35:17.392 read: IOPS=8120, BW=31.7MiB/s (33.3MB/s)(33.2MiB/1047msec)
00:35:17.392 slat (nsec): min=961, max=9674.9k, avg=54618.69, stdev=413670.01
00:35:17.392 clat (usec): min=1725, max=50379, avg=8180.71, stdev=5445.96
00:35:17.392 lat (usec): min=1737, max=52969, avg=8235.33, stdev=5458.53
00:35:17.392 clat percentiles (usec):
00:35:17.392 | 1.00th=[ 3490], 5.00th=[ 4490], 10.00th=[ 5014], 20.00th=[ 5997],
00:35:17.392 | 30.00th=[ 6456], 40.00th=[ 6849], 50.00th=[ 7177], 60.00th=[ 7504],
00:35:17.392 | 70.00th=[ 8225], 80.00th=[ 9241], 90.00th=[11076], 95.00th=[12518],
00:35:17.392 | 99.00th=[47449], 99.50th=[47973], 99.90th=[50070], 99.95th=[50070],
00:35:17.392 | 99.99th=[50594]
00:35:17.392 write: IOPS=8313, BW=32.5MiB/s (34.1MB/s)(34.0MiB/1047msec); 0 zone resets
00:35:17.392 slat (nsec): min=1572, max=12653k, avg=49754.49, stdev=376566.62
00:35:17.392 clat (usec): min=279, max=21324, avg=7124.43, stdev=3325.31
00:35:17.392 lat (usec): min=313, max=21334, avg=7174.18, stdev=3342.54
00:35:17.392 clat percentiles (usec):
00:35:17.392 | 1.00th=[ 1139], 5.00th=[ 2966], 10.00th=[ 3884], 20.00th=[ 4621],
00:35:17.392 | 30.00th=[ 5211], 40.00th=[ 5997], 50.00th=[ 6718], 60.00th=[ 7111],
00:35:17.392 | 70.00th=[ 7701], 80.00th=[ 8848], 90.00th=[11863], 95.00th=[13698],
00:35:17.392 | 99.00th=[18220], 99.50th=[19268], 99.90th=[21103], 99.95th=[21365],
00:35:17.392 | 99.99th=[21365]
00:35:17.392 bw ( KiB/s): min=33232, max=36400, per=39.33%, avg=34816.00, stdev=2240.11, samples=2
00:35:17.392 iops : min= 8308, max= 9100, avg=8704.00, stdev=560.03, samples=2
00:35:17.392 lat (usec) : 500=0.03%, 750=0.12%, 1000=0.17%
00:35:17.392 lat (msec) : 2=1.24%, 4=5.50%, 10=77.87%, 20=14.11%, 50=0.77%
00:35:17.392 lat (msec) : 100=0.19%
00:35:17.392 cpu : usr=5.16%, sys=9.46%, ctx=571, majf=0, minf=1
00:35:17.392 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6%
00:35:17.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:17.392 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:35:17.392 issued rwts: total=8502,8704,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:17.392 latency : target=0, window=0, percentile=100.00%, depth=128
00:35:17.392 job2: (groupid=0, jobs=1): err= 0: pid=1349653: Fri Nov 15 11:58:42 2024
00:35:17.392 read: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec)
00:35:17.392 slat (nsec): min=941, max=13170k, avg=84043.49, stdev=677542.97
00:35:17.392 clat (usec): min=2821, max=58572, avg=12225.75, stdev=5782.72
00:35:17.392 lat (usec): min=2831, max=58579, avg=12309.80, stdev=5823.94
00:35:17.392 clat percentiles (usec):
00:35:17.392 | 1.00th=[ 4359], 5.00th=[ 6849], 10.00th=[ 7373], 20.00th=[ 8291],
00:35:17.392 | 30.00th=[ 8586], 40.00th=[ 9503], 50.00th=[10421], 60.00th=[11863],
00:35:17.392 | 70.00th=[13960], 80.00th=[15926], 90.00th=[17957], 95.00th=[20055],
00:35:17.392 | 99.00th=[29492], 99.50th=[52691], 99.90th=[54789], 99.95th=[54789],
00:35:17.392 | 99.99th=[58459]
00:35:17.392 write: IOPS=5754, BW=22.5MiB/s (23.6MB/s)(22.7MiB/1009msec); 0 zone resets
00:35:17.392 slat (nsec): min=1577, max=8401.0k, avg=85618.47, stdev=527869.10
00:35:17.392 clat (usec): min=1001, max=35940, avg=11274.95, stdev=5828.22
00:35:17.392 lat (usec): min=1010, max=35944, avg=11360.57, stdev=5862.64
00:35:17.392 clat percentiles (usec):
00:35:17.392 | 1.00th=[ 3032], 5.00th=[ 4228], 10.00th=[ 5932], 20.00th=[ 7832],
00:35:17.392 | 30.00th=[ 8029], 40.00th=[ 8586], 50.00th=[ 9503], 60.00th=[10552],
00:35:17.393 | 70.00th=[12780], 80.00th=[14746], 90.00th=[19268], 95.00th=[22938],
00:35:17.393 | 99.00th=[35390], 99.50th=[35914], 99.90th=[35914], 99.95th=[35914],
00:35:17.393 | 99.99th=[35914]
00:35:17.393 bw ( KiB/s): min=18296, max=27128, per=25.66%, avg=22712.00, stdev=6245.17, samples=2
00:35:17.393 iops : min= 4574, max= 6782, avg=5678.00, stdev=1561.29, samples=2
00:35:17.393 lat (msec) : 2=0.24%, 4=2.28%, 10=50.72%, 20=38.65%, 50=7.80%
00:35:17.393 lat (msec) : 100=0.31%
00:35:17.393 cpu : usr=3.47%, sys=6.85%, ctx=422, majf=0, minf=1
00:35:17.393 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4%
00:35:17.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:17.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:35:17.393 issued rwts: total=5120,5806,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:17.393 latency : target=0, window=0, percentile=100.00%, depth=128
00:35:17.393 job3: (groupid=0, jobs=1): err= 0: pid=1349656: Fri Nov 15 11:58:42 2024
00:35:17.393 read: IOPS=5028, BW=19.6MiB/s (20.6MB/s)(19.7MiB/1004msec)
00:35:17.393 slat (nsec): min=925, max=10696k, avg=97574.17, stdev=668508.87
00:35:17.393 clat (usec): min=1300, max=27137, avg=12588.82, stdev=3814.88
00:35:17.393 lat (usec): min=3079, max=27518, avg=12686.39, stdev=3867.34
00:35:17.393 clat percentiles (usec):
00:35:17.393 | 1.00th=[ 5014], 5.00th=[ 7046], 10.00th=[ 8029], 20.00th=[ 9241],
00:35:17.393 | 30.00th=[10290], 40.00th=[11207], 50.00th=[11994], 60.00th=[13435],
00:35:17.393 | 70.00th=[13960], 80.00th=[16057], 90.00th=[18220], 95.00th=[19530],
00:35:17.393 | 99.00th=[20841], 99.50th=[22414], 99.90th=[25035], 99.95th=[26608],
00:35:17.393 | 99.99th=[27137]
00:35:17.393 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets
00:35:17.393 slat (nsec): min=1589, max=8890.5k, avg=92434.73, stdev=556834.49
00:35:17.393 clat (usec): min=1166, max=33538, avg=12461.70, stdev=6651.50
00:35:17.393 lat (usec): min=1177, max=33545, avg=12554.13, stdev=6702.40
00:35:17.393 clat percentiles (usec):
00:35:17.393 | 1.00th=[ 2999], 5.00th=[ 4359], 10.00th=[ 6456], 20.00th=[ 7832],
00:35:17.393 | 30.00th=[ 8291], 40.00th=[ 9110], 50.00th=[10028], 60.00th=[11994],
00:35:17.393 | 70.00th=[14484], 80.00th=[17171], 90.00th=[22938], 95.00th=[26608],
00:35:17.393 | 99.00th=[32113], 99.50th=[32637], 99.90th=[33424], 99.95th=[33424],
00:35:17.393 | 99.99th=[33424]
00:35:17.393 bw ( KiB/s): min=19608, max=21352, per=23.14%, avg=20480.00, stdev=1233.19, samples=2
00:35:17.393 iops : min= 4902, max= 5338, avg=5120.00, stdev=308.30, samples=2
00:35:17.393 lat (msec) : 2=0.26%, 4=2.15%, 10=36.20%, 20=53.26%, 50=8.13%
00:35:17.393 cpu : usr=3.79%, sys=5.78%, ctx=456, majf=0, minf=2
00:35:17.393 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4%
00:35:17.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:17.393 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:35:17.393 issued rwts: total=5049,5120,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:17.393 latency : target=0, window=0, percentile=100.00%, depth=128
00:35:17.393
00:35:17.393 Run status group 0 (all jobs):
00:35:17.393 READ: bw=81.1MiB/s (85.1MB/s), 12.0MiB/s-31.7MiB/s (12.5MB/s-33.3MB/s), io=84.9MiB (89.1MB), run=1003-1047msec
00:35:17.393 WRITE: bw=86.4MiB/s (90.6MB/s), 13.8MiB/s-32.5MiB/s (14.5MB/s-34.1MB/s), io=90.5MiB (94.9MB), run=1003-1047msec
00:35:17.393
00:35:17.393 Disk stats (read/write):
00:35:17.393 nvme0n1: ios=2610/2893, merge=0/0, ticks=15720/12894, in_queue=28614, util=86.57%
00:35:17.393 nvme0n2: ios=7184/7168, merge=0/0, ticks=50380/45657, in_queue=96037, util=97.45%
00:35:17.393 nvme0n3: ios=4096/5028, merge=0/0, ticks=31857/33389, in_queue=65246, util=87.24%
00:35:17.393 nvme0n4: ios=4096/4096, merge=0/0, ticks=29380/31370, in_queue=60750, util=89.42%
00:35:17.393 11:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync
00:35:17.393 11:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10
00:35:17.393 11:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1349985
00:35:17.393 11:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3
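This read pass is the hotplug stage: fio-wrapper is left running in the background (fio_pid=1349985, runtime=10, iodepth=1) while the bdevs behind the subsystem's namespaces are deleted underneath it. The shape of what follows, condensed into a sketch (commands as they appear in the trace below; the loop and status check are a paraphrase, not fio.sh verbatim):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
  fio_pid=$!
  sleep 3
  rpc.py bdev_raid_delete concat0          # in-flight reads on the deleted bdevs start failing with err=95
  rpc.py bdev_raid_delete raid0
  for m in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
      rpc.py bdev_malloc_delete "$m"
  done
  wait "$fio_pid" || echo 'nvmf hotplug test: fio failed as expected'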
00:35:17.393 [global]
00:35:17.393 thread=1
00:35:17.393 invalidate=1
00:35:17.393 rw=read
00:35:17.393 time_based=1
00:35:17.393 runtime=10
00:35:17.393 ioengine=libaio
00:35:17.393 direct=1
00:35:17.393 bs=4096
00:35:17.393 iodepth=1
00:35:17.393 norandommap=1
00:35:17.393 numjobs=1
00:35:17.393
00:35:17.393 [job0]
00:35:17.393 filename=/dev/nvme0n1
00:35:17.393 [job1]
00:35:17.393 filename=/dev/nvme0n2
00:35:17.393 [job2]
00:35:17.393 filename=/dev/nvme0n3
00:35:17.393 [job3]
00:35:17.393 filename=/dev/nvme0n4
00:35:17.654 Could not set queue depth (nvme0n1)
00:35:17.654 Could not set queue depth (nvme0n2)
00:35:17.654 Could not set queue depth (nvme0n3)
00:35:17.654 Could not set queue depth (nvme0n4)
00:35:17.654 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:35:17.654 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:35:17.654 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:35:17.654 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:35:17.654 fio-3.35
00:35:17.654 Starting 4 threads
00:35:20.196 11:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0
00:35:20.196 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=495616, buflen=4096
00:35:20.196 fio: pid=1350173, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:35:20.196 11:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0
00:35:20.456 11:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:35:20.456 11:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:35:20.456 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=2789376, buflen=4096
00:35:20.456 fio: pid=1350172, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:35:20.717 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=290816, buflen=4096
00:35:20.717 fio: pid=1350170, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:35:20.717 11:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:35:20.717 11:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:35:20.717 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=12034048, buflen=4096
00:35:20.717 fio: pid=1350171, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported
00:35:20.978 11:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:35:20.978 11:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2
00:35:20.978
00:35:20.978 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1350170: Fri Nov 15 11:58:46 2024
00:35:20.978 read: IOPS=24, BW=95.9KiB/s (98.2kB/s)(284KiB/2962msec)
00:35:20.978 slat (usec): min=24, max=2642, avg=64.91, stdev=309.15
00:35:20.978 clat (usec): min=1360, max=42131, avg=41336.54, stdev=4816.91
00:35:20.978 lat (usec): min=1387, max=42156, avg=41365.14, stdev=4816.92
00:35:20.978 clat percentiles (usec):
00:35:20.978 | 1.00th=[ 1369], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681],
00:35:20.978 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206],
00:35:20.978 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:35:20.978 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:35:20.978 | 99.99th=[42206]
00:35:20.978 bw ( KiB/s): min= 88, max= 96, per=1.94%, avg=94.40, stdev= 3.58, samples=5
00:35:20.978 iops : min= 22, max= 24, avg=23.60, stdev= 0.89, samples=5
00:35:20.978 lat (msec) : 2=1.39%, 50=97.22%
00:35:20.978 cpu : usr=0.03%, sys=0.03%, ctx=74, majf=0, minf=1
00:35:20.978 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:35:20.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:20.978 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:20.978 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:20.978 latency : target=0, window=0, percentile=100.00%, depth=1
00:35:20.978 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1350171: Fri Nov 15 11:58:46 2024
00:35:20.978 read: IOPS=936, BW=3745KiB/s (3835kB/s)(11.5MiB/3138msec)
00:35:20.978 slat (usec): min=6, max=6937, avg=36.36, stdev=270.49
00:35:20.978 clat (usec): min=591, max=40963, avg=1020.06, stdev=1050.97
00:35:20.978 lat (usec): min=616, max=40988, avg=1056.43, stdev=1086.16
00:35:20.978 clat percentiles (usec):
00:35:20.978 | 1.00th=[ 750], 5.00th=[ 824], 10.00th=[ 873], 20.00th=[ 922],
00:35:20.978 | 30.00th=[ 955], 40.00th=[ 971], 50.00th=[ 996], 60.00th=[ 1012],
00:35:20.978 | 70.00th=[ 1029], 80.00th=[ 1057], 90.00th=[ 1106], 95.00th=[ 1156],
00:35:20.978 | 99.00th=[ 1254], 99.50th=[ 1319], 99.90th=[ 5997], 99.95th=[41157],
00:35:20.978 | 99.99th=[41157]
00:35:20.978 bw ( KiB/s): min= 3288, max= 3952, per=78.55%, avg=3816.00, stdev=259.48, samples=6
00:35:20.978 iops : min= 822, max= 988, avg=954.00, stdev=64.87, samples=6
00:35:20.978 lat (usec) : 750=0.92%, 1000=53.76%
00:35:20.978 lat (msec) : 2=45.19%, 10=0.03%, 50=0.07%
00:35:20.978 cpu : usr=0.86%, sys=2.93%, ctx=2948, majf=0, minf=2
00:35:20.978 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:35:20.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:20.978 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:20.978 issued rwts: total=2939,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:20.978 latency : target=0, window=0, percentile=100.00%, depth=1
00:35:20.978 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1350172: Fri Nov 15 11:58:46 2024
00:35:20.978 read: IOPS=244, BW=976KiB/s (999kB/s)(2724KiB/2791msec)
00:35:20.978 slat (nsec): min=8132, max=66022, avg=27257.71, stdev=4938.40
00:35:20.978 clat (usec): min=762, max=44899, avg=4032.80, stdev=10310.87
00:35:20.978 lat (usec): min=789, max=44932, avg=4060.06, stdev=10311.17
00:35:20.978 clat percentiles (usec):
00:35:20.978 | 1.00th=[ 922], 5.00th=[ 1029], 10.00th=[ 1074], 20.00th=[ 1139],
00:35:20.978 | 30.00th=[ 1172], 40.00th=[ 1188], 50.00th=[ 1205], 60.00th=[ 1237],
00:35:20.978 | 70.00th=[ 1254], 80.00th=[ 1287], 90.00th=[ 1352], 95.00th=[41157],
00:35:20.978 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827],
00:35:20.978 | 99.99th=[44827]
00:35:20.978 bw ( KiB/s): min= 208, max= 1528, per=21.80%, avg=1059.20, stdev=511.71, samples=5
00:35:20.978 iops : min= 52, max= 382, avg=264.80, stdev=127.93, samples=5
00:35:20.978 lat (usec) : 1000=2.79%
00:35:20.978 lat (msec) : 2=90.03%, 50=7.04%
00:35:20.978 cpu : usr=0.36%, sys=0.68%, ctx=687, majf=0, minf=2
00:35:20.978 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:35:20.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:20.978 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:20.978 issued rwts: total=682,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:20.978 latency : target=0, window=0, percentile=100.00%, depth=1
00:35:20.978 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1350173: Fri Nov 15 11:58:46 2024
00:35:20.978 read: IOPS=46, BW=186KiB/s (191kB/s)(484KiB/2598msec)
00:35:20.978 slat (nsec): min=8344, max=59813, avg=25892.19, stdev=5375.90
00:35:20.978 clat (usec): min=673, max=42121, avg=21260.27, stdev=20394.37
00:35:20.978 lat (usec): min=733, max=42147, avg=21286.16, stdev=20394.22
00:35:20.978 clat percentiles (usec):
00:35:20.978 | 1.00th=[ 807], 5.00th=[ 963], 10.00th=[ 1057], 20.00th=[ 1139],
00:35:20.978 | 30.00th=[ 1188], 40.00th=[ 1237], 50.00th=[ 1401], 60.00th=[41681],
00:35:20.978 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206],
00:35:20.978 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:35:20.978 | 99.99th=[42206]
00:35:20.978 bw ( KiB/s): min= 96, max= 568, per=3.91%, avg=190.40, stdev=211.08, samples=5
00:35:20.979 iops : min= 24, max= 142, avg=47.60, stdev=52.77, samples=5
00:35:20.979 lat (usec) : 750=0.82%, 1000=4.92%
00:35:20.979 lat (msec) : 2=44.26%, 50=49.18%
00:35:20.979 cpu : usr=0.00%, sys=0.19%, ctx=123, majf=0, minf=2
00:35:20.979 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:35:20.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:20.979 complete : 0=0.8%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:35:20.979 issued rwts: total=122,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:35:20.979 latency : target=0, window=0, percentile=100.00%, depth=1
00:35:20.979
00:35:20.979 Run status group 0 (all jobs):
00:35:20.979 READ: bw=4858KiB/s (4974kB/s), 95.9KiB/s-3745KiB/s (98.2kB/s-3835kB/s), io=14.9MiB (15.6MB), run=2598-3138msec
00:35:20.979
00:35:20.979 Disk stats (read/write):
00:35:20.979 nvme0n1: ios=68/0, merge=0/0, ticks=2810/0, in_queue=2810, util=94.79%
00:35:20.979 nvme0n2: ios=2932/0, merge=0/0, ticks=2913/0, in_queue=2913, util=94.77%
00:35:20.979 nvme0n3: ios=711/0, merge=0/0, ticks=3246/0, in_queue=3246, util=99.59%
00:35:20.979 nvme0n4: ios=122/0, merge=0/0, ticks=2584/0, in_queue=2584, util=96.35%
00:35:20.979 11:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:35:20.979 11:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3
00:35:21.239 11:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:35:21.239 11:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4
00:35:21.500 11:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:35:21.500 11:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5
00:35:21.500 11:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:35:21.500 11:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6
00:35:21.761 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0
00:35:21.761 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1349985
00:35:21.761 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4
00:35:21.761 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:35:21.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:35:21.761 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:35:21.761 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0
00:35:21.761 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL
00:35:21.761 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME
00:35:21.761 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL
00:35:21.761 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME
00:35:21.761 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0
00:35:21.761 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']'
00:35:21.761 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected'
00:35:21.761 nvmf hotplug test: fio failed as expected
00:35:21.761 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:35:22.021 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state
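err=95 is EOPNOTSUPP: once a namespace's backing bdev was deleted, reads on that /dev/nvme0nX could no longer be served, fio exited nonzero (fio_status=4 via the wait above), and the '[' 4 -eq 0 ']' check deliberately failed — exactly the outcome the hotplug test wants. The waitforserial_disconnect helper then polls lsblk until the SPDK serial vanishes; a minimal sketch of its visible core (the real implementation lives in autotest_common.sh, the loop here is a paraphrase):

  serial=SPDKISFASTANDAWESOME
  while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
      sleep 1    # block devices of the disconnected controller are still being torn down
  done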
00:35:22.021 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state
00:35:22.021 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state
00:35:22.021 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT
00:35:22.021 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini
00:35:22.021 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup
00:35:22.021 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync
00:35:22.021 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:35:22.021 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e
00:35:22.021 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20}
00:35:22.021 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:35:22.021 rmmod nvme_tcp
00:35:22.021 rmmod nvme_fabrics
00:35:22.021 rmmod nvme_keyring
00:35:22.021 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:35:22.281 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e
00:35:22.281 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0
00:35:22.281 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1346711 ']'
00:35:22.281 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1346711
00:35:22.281 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 1346711 ']'
00:35:22.281 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 1346711
00:35:22.281 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname
00:35:22.281 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:35:22.281 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1346711
00:35:22.281 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:35:22.281 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:35:22.281 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1346711'
00:35:22.281 killing process with pid 1346711
00:35:22.281 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 1346711
00:35:22.281 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 1346711
00:35:22.281 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:35:22.281 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:35:22.281 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:35:22.281 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr
00:35:22.281 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save
00:35:22.281 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:35:22.281 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore
00:35:22.281 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:35:22.281 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns
00:35:22.281 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:22.281 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:35:22.281 11:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:24.822 11:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:35:24.822
00:35:24.822 real 0m28.310s
00:35:24.822 user 2m21.022s
00:35:24.822 sys 0m12.446s
00:35:24.822 11:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable
00:35:24.822 11:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:35:24.822 ************************************
00:35:24.822 END TEST nvmf_fio_target
00:35:24.822 ************************************
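Teardown above runs in a fixed order: the initiator-side kernel modules are unloaded (modprobe -v -r nvme-tcp and nvme-fabrics, with rmmod output for nvme_tcp, nvme_fabrics and nvme_keyring), the target process is killed, iptables rules are restored without the SPDK_NVMF chains, the SPDK network namespace is removed, and leftover addresses are flushed. The killprocess helper reduces to roughly this sketch (pid taken from the log; the control flow is a paraphrase of the trace, not the helper verbatim):

  pid=1346711
  if ps --no-headers -o comm= "$pid" > /dev/null; then
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"    # the target was started by this shell, so wait can reap it
  fi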
00:35:24.822 11:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode
00:35:24.822 11:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:35:24.822 11:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable
00:35:24.822 11:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:35:24.822 ************************************
00:35:24.822 START TEST nvmf_bdevio
00:35:24.822 ************************************
00:35:24.822 11:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode
00:35:24.822 * Looking for test storage...
00:35:24.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:35:24.822 11:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:35:24.822 11:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version
00:35:24.822 11:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-:
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-:
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<'
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 ))
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:35:24.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:24.823 --rc genhtml_branch_coverage=1
00:35:24.823 --rc genhtml_function_coverage=1
00:35:24.823 --rc genhtml_legend=1
00:35:24.823 --rc geninfo_all_blocks=1
00:35:24.823 --rc geninfo_unexecuted_blocks=1
00:35:24.823
00:35:24.823 '
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:35:24.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:24.823 --rc genhtml_branch_coverage=1
00:35:24.823 --rc genhtml_function_coverage=1
00:35:24.823 --rc genhtml_legend=1
00:35:24.823 --rc geninfo_all_blocks=1
00:35:24.823 --rc geninfo_unexecuted_blocks=1
00:35:24.823
00:35:24.823 '
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:35:24.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:24.823 --rc genhtml_branch_coverage=1
00:35:24.823 --rc genhtml_function_coverage=1
00:35:24.823 --rc genhtml_legend=1
00:35:24.823 --rc geninfo_all_blocks=1
00:35:24.823 --rc geninfo_unexecuted_blocks=1
00:35:24.823
00:35:24.823 '
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:35:24.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:35:24.823 --rc genhtml_branch_coverage=1
00:35:24.823 --rc genhtml_function_coverage=1
00:35:24.823 --rc genhtml_legend=1
00:35:24.823 --rc geninfo_all_blocks=1
00:35:24.823 --rc geninfo_unexecuted_blocks=1
00:35:24.823
00:35:24.823 '
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:35:24.823 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:35:24.824 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:35:24.824 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:35:24.824 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:35:24.824 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:35:24.824 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0
00:35:24.824 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:35:24.824 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:35:24.824 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit
00:35:24.824 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:35:24.824 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:35:24.824 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs
00:35:24.824 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no
00:35:24.824 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns
00:35:24.824 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:35:24.824 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:35:24.824 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:35:24.824 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:35:24.824 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:35:24.824 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable
00:35:24.824 11:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
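gather_supported_nvmf_pci_devs below builds its candidate list from PCI device IDs (Intel E810/X722 and Mellanox parts) and then maps each matching PCI function to kernel netdev names by globbing sysfs, exactly as the trace shows. The core of that mapping as a standalone sketch (PCI address and array names taken from the trace itself):

  pci=0000:4b:00.0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per netdev exposed by this function
  pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep only the interface name
  echo "Found net devices under $pci: ${pci_net_devs[*]}"    # e.g. cvl_0_0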
net_devs 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:32.961 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:32.961 11:58:57 
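
gather_supported_nvmf_pci_devs, traced above, matches installed NICs against a fixed vendor/device table (Intel E810/X722 plus several Mellanox IDs) and then walks /sys/bus/pci/devices/$pci/net/ for the bound net devices. A minimal standalone sketch of the same lookup using lspci, with the vendor and E810 device IDs copied from the trace; the sysfs path assumes the 0000:4b:00.0 port found above:

intel=8086                                  # Intel vendor ID from the trace
for id in 1592 159b; do                     # E810 device IDs from the trace
    lspci -D -d "$intel:$id"                # lists e.g. 0000:4b:00.0 and 0000:4b:00.1
done
ls /sys/bus/pci/devices/0000:4b:00.0/net/   # bound netdev name, e.g. cvl_0_0
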
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:32.961 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:32.961 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:32.961 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:32.961 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:32.962 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:32.962 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.548 ms 00:35:32.962 00:35:32.962 --- 10.0.0.2 ping statistics --- 00:35:32.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:32.962 rtt min/avg/max/mdev = 0.548/0.548/0.548/0.000 ms 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:32.962 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:32.962 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:35:32.962 00:35:32.962 --- 10.0.0.1 ping statistics --- 00:35:32.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:32.962 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:32.962 11:58:57 
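
nvmf_tcp_init, traced above, splits the two E810 ports across a network namespace so that target and initiator traffic crosses real hardware. The same sequence condensed into plain commands, assuming the cvl_0_0/cvl_0_1 port names and 10.0.0.0/24 addressing from the trace (the ipts wrapper's SPDK_NVMF comment tag is omitted for brevity):

ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move port 0 into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP, host side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP, namespaced
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                  # verify host -> target, as above
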
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=1355194 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1355194 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 1355194 ']' 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:32.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:32.962 11:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:32.962 [2024-11-15 11:58:57.660183] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:32.962 [2024-11-15 11:58:57.661307] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:35:32.962 [2024-11-15 11:58:57.661357] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:32.962 [2024-11-15 11:58:57.761635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:32.962 [2024-11-15 11:58:57.814593] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:32.962 [2024-11-15 11:58:57.814644] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:32.962 [2024-11-15 11:58:57.814652] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:32.962 [2024-11-15 11:58:57.814659] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:32.962 [2024-11-15 11:58:57.814666] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:32.962 [2024-11-15 11:58:57.816830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:32.962 [2024-11-15 11:58:57.816995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:32.962 [2024-11-15 11:58:57.817155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:32.962 [2024-11-15 11:58:57.817155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:32.962 [2024-11-15 11:58:57.895610] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
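
The -m 0x78 core mask handed to nvmf_tgt above selects bits 3 through 6, which is why exactly four reactors report cores 3, 4, 5 and 6. A quick check of the decoding:

printf '%d\n' 0x78                                                   # 120 = 0b01111000
for c in {0..7}; do (( (0x78 >> c) & 1 )) && echo "core $c"; done    # prints core 3 .. core 6
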
00:35:32.962 [2024-11-15 11:58:57.896221] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:32.962 [2024-11-15 11:58:57.896782] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:32.962 [2024-11-15 11:58:57.897286] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:32.962 [2024-11-15 11:58:57.897302] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:33.224 11:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:33.224 11:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:35:33.224 11:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:33.224 11:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:33.224 11:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:33.224 11:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:33.224 11:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:33.224 11:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.224 11:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:33.224 [2024-11-15 11:58:58.530047] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:33.224 11:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.224 11:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:33.224 11:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.224 11:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:33.224 Malloc0 00:35:33.224 11:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.224 11:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:33.224 11:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.224 11:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:33.224 11:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.224 11:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:33.224 11:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.224 11:58:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:33.224 11:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.224 11:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:33.224 11:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:33.224 11:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:33.224 [2024-11-15 11:58:58.626264] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:33.224 11:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:33.224 11:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:35:33.224 11:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:35:33.224 11:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:35:33.224 11:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:35:33.224 11:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:33.224 11:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:33.224 { 00:35:33.224 "params": { 00:35:33.224 "name": "Nvme$subsystem", 00:35:33.224 "trtype": "$TEST_TRANSPORT", 00:35:33.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:33.224 "adrfam": "ipv4", 00:35:33.224 "trsvcid": "$NVMF_PORT", 00:35:33.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:33.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:33.224 "hdgst": ${hdgst:-false}, 00:35:33.224 "ddgst": ${ddgst:-false} 00:35:33.224 }, 00:35:33.224 "method": "bdev_nvme_attach_controller" 00:35:33.224 } 00:35:33.224 EOF 00:35:33.224 )") 00:35:33.224 11:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:35:33.224 11:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:35:33.224 11:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:35:33.224 11:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:33.224 "params": { 00:35:33.224 "name": "Nvme1", 00:35:33.224 "trtype": "tcp", 00:35:33.224 "traddr": "10.0.0.2", 00:35:33.224 "adrfam": "ipv4", 00:35:33.224 "trsvcid": "4420", 00:35:33.224 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:33.224 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:33.224 "hdgst": false, 00:35:33.224 "ddgst": false 00:35:33.224 }, 00:35:33.224 "method": "bdev_nvme_attach_controller" 00:35:33.224 }' 00:35:33.224 [2024-11-15 11:58:58.682573] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
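
target/bdevio.sh drives the target through the rpc_cmd calls traced above, then feeds bdevio the JSON it just printed via process substitution (/dev/fd/62). Run standalone with SPDK's rpc.py client, the same setup would look roughly like this; the rpc.py path and the default /var/tmp/spdk.sock socket are assumptions based on a stock SPDK checkout:

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192       # transport opts from the trace
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MiB bdev, 512 B blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
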
00:35:33.224 [2024-11-15 11:58:58.682648] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1355539 ] 00:35:33.485 [2024-11-15 11:58:58.776418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:33.485 [2024-11-15 11:58:58.833462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:33.485 [2024-11-15 11:58:58.833626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:33.485 [2024-11-15 11:58:58.833673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:33.745 I/O targets: 00:35:33.745 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:35:33.745 00:35:33.745 00:35:33.745 CUnit - A unit testing framework for C - Version 2.1-3 00:35:33.745 http://cunit.sourceforge.net/ 00:35:33.745 00:35:33.745 00:35:33.745 Suite: bdevio tests on: Nvme1n1 00:35:33.745 Test: blockdev write read block ...passed 00:35:34.005 Test: blockdev write zeroes read block ...passed 00:35:34.005 Test: blockdev write zeroes read no split ...passed 00:35:34.005 Test: blockdev write zeroes read split ...passed 00:35:34.005 Test: blockdev write zeroes read split partial ...passed 00:35:34.005 Test: blockdev reset ...[2024-11-15 11:58:59.291647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:35:34.005 [2024-11-15 11:58:59.291748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18c8970 (9): Bad file descriptor 00:35:34.005 [2024-11-15 11:58:59.339683] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:35:34.005 passed 00:35:34.005 Test: blockdev write read 8 blocks ...passed 00:35:34.005 Test: blockdev write read size > 128k ...passed 00:35:34.005 Test: blockdev write read invalid size ...passed 00:35:34.005 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:34.005 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:34.005 Test: blockdev write read max offset ...passed 00:35:34.266 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:34.266 Test: blockdev writev readv 8 blocks ...passed 00:35:34.266 Test: blockdev writev readv 30 x 1block ...passed 00:35:34.266 Test: blockdev writev readv block ...passed 00:35:34.266 Test: blockdev writev readv size > 128k ...passed 00:35:34.266 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:34.266 Test: blockdev comparev and writev ...[2024-11-15 11:58:59.643982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:34.266 [2024-11-15 11:58:59.644030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:34.266 [2024-11-15 11:58:59.644048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:34.266 [2024-11-15 11:58:59.644057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:34.266 [2024-11-15 11:58:59.644701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:34.266 [2024-11-15 11:58:59.644717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:34.266 [2024-11-15 11:58:59.644731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:34.266 [2024-11-15 11:58:59.644740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:34.266 [2024-11-15 11:58:59.645374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:34.266 [2024-11-15 11:58:59.645387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:34.266 [2024-11-15 11:58:59.645402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:34.266 [2024-11-15 11:58:59.645410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:34.266 [2024-11-15 11:58:59.645998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:34.266 [2024-11-15 11:58:59.646011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:34.266 [2024-11-15 11:58:59.646025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:34.266 [2024-11-15 11:58:59.646033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:34.266 passed 00:35:34.266 Test: blockdev nvme passthru rw ...passed 00:35:34.266 Test: blockdev nvme passthru vendor specific ...[2024-11-15 11:58:59.731415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:34.266 [2024-11-15 11:58:59.731436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:34.266 [2024-11-15 11:58:59.731822] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:34.266 [2024-11-15 11:58:59.731834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:34.266 [2024-11-15 11:58:59.732259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:34.266 [2024-11-15 11:58:59.732270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:34.266 [2024-11-15 11:58:59.732650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:34.266 [2024-11-15 11:58:59.732663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:34.266 passed 00:35:34.266 Test: blockdev nvme admin passthru ...passed 00:35:34.526 Test: blockdev copy ...passed 00:35:34.526 00:35:34.526 Run Summary: Type Total Ran Passed Failed Inactive 00:35:34.527 suites 1 1 n/a 0 0 00:35:34.527 tests 23 23 23 0 0 00:35:34.527 asserts 152 152 152 0 n/a 00:35:34.527 00:35:34.527 Elapsed time = 1.253 seconds 00:35:34.527 11:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:34.527 11:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:34.527 11:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:34.527 11:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:34.527 11:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:35:34.527 11:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:35:34.527 11:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:34.527 11:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:35:34.527 11:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:34.527 11:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:35:34.527 11:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:34.527 11:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:34.527 rmmod nvme_tcp 00:35:34.527 rmmod nvme_fabrics 00:35:34.527 rmmod nvme_keyring 00:35:34.527 11:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
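
nvmfcleanup, traced around this point, wraps the module unloads in set +e and a bounded retry loop because nvme-tcp can stay referenced briefly while the controller teardown completes. The approximate shape of that loop, hedged from the trace (the pacing between attempts is not visible in this log):

set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp                 # emits the rmmod lines seen above
    modprobe -v -r nvme-fabrics && break    # stop once fabrics unloads cleanly
    sleep 1                                 # assumed pacing; real script may differ
done
set -e
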
00:35:34.527 11:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:35:34.527 11:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:35:34.527 11:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 1355194 ']' 00:35:34.527 11:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1355194 00:35:34.527 11:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 1355194 ']' 00:35:34.527 11:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 1355194 00:35:34.527 11:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:35:34.527 11:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:34.527 11:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1355194 00:35:34.789 11:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:35:34.789 11:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:35:34.789 11:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1355194' 00:35:34.789 killing process with pid 1355194 00:35:34.789 11:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 1355194 00:35:34.789 11:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 1355194 00:35:34.789 11:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:34.789 11:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:34.789 11:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:34.789 11:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:35:34.789 11:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:35:34.789 11:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:34.789 11:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:35:34.789 11:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:34.789 11:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:34.789 11:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:34.789 11:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:34.789 11:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:37.378 11:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:37.378 00:35:37.378 real 0m12.487s 00:35:37.378 user 
0m10.683s 00:35:37.378 sys 0m6.676s 00:35:37.378 11:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:37.378 11:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:37.378 ************************************ 00:35:37.378 END TEST nvmf_bdevio 00:35:37.378 ************************************ 00:35:37.378 11:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:35:37.378 00:35:37.378 real 5m3.043s 00:35:37.378 user 10m19.751s 00:35:37.378 sys 2m4.826s 00:35:37.378 11:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:37.378 11:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:37.378 ************************************ 00:35:37.378 END TEST nvmf_target_core_interrupt_mode 00:35:37.378 ************************************ 00:35:37.378 11:59:02 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:37.378 11:59:02 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:35:37.378 11:59:02 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:37.378 11:59:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:37.378 ************************************ 00:35:37.378 START TEST nvmf_interrupt 00:35:37.378 ************************************ 00:35:37.378 11:59:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:37.378 * Looking for test storage... 
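
interrupt.sh begins by checking the installed lcov version; the lt 1.15 2 / cmp_versions walk traced below splits both version strings on [.-:] and compares them field by field. A hedged sketch of that comparator (the real scripts/common.sh additionally validates each field through decimal(); purely numeric fields are assumed here):

lt() { cmp_versions "$1" '<' "$2"; }
cmp_versions() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == '>' ]]; return; }   # first higher field decides
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == '<' ]]; return; }   # first lower field decides
    done
    [[ $2 == *'='* ]]                        # all fields equal: true only for <=, >=, ==
}
cmp_versions 1.15 '<' 2 && echo yes          # yes, matching the trace's return 0
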
00:35:37.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:37.378 11:59:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:37.378 11:59:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:35:37.378 11:59:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:37.378 11:59:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:37.378 11:59:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:37.378 11:59:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:37.378 11:59:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:37.378 11:59:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:35:37.378 11:59:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:35:37.378 11:59:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:35:37.378 11:59:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:35:37.378 11:59:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:35:37.378 11:59:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:35:37.378 11:59:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:35:37.378 11:59:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:37.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:37.379 --rc genhtml_branch_coverage=1 00:35:37.379 --rc genhtml_function_coverage=1 00:35:37.379 --rc genhtml_legend=1 00:35:37.379 --rc geninfo_all_blocks=1 00:35:37.379 --rc geninfo_unexecuted_blocks=1 00:35:37.379 00:35:37.379 ' 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:37.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:37.379 --rc genhtml_branch_coverage=1 00:35:37.379 --rc genhtml_function_coverage=1 00:35:37.379 --rc genhtml_legend=1 00:35:37.379 --rc geninfo_all_blocks=1 00:35:37.379 --rc geninfo_unexecuted_blocks=1 00:35:37.379 00:35:37.379 ' 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:37.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:37.379 --rc genhtml_branch_coverage=1 00:35:37.379 --rc genhtml_function_coverage=1 00:35:37.379 --rc genhtml_legend=1 00:35:37.379 --rc geninfo_all_blocks=1 00:35:37.379 --rc geninfo_unexecuted_blocks=1 00:35:37.379 00:35:37.379 ' 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:37.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:37.379 --rc genhtml_branch_coverage=1 00:35:37.379 --rc genhtml_function_coverage=1 00:35:37.379 --rc genhtml_legend=1 00:35:37.379 --rc geninfo_all_blocks=1 00:35:37.379 --rc geninfo_unexecuted_blocks=1 00:35:37.379 00:35:37.379 ' 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:35:37.379 11:59:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:45.521 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:45.521 11:59:09 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:45.521 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:45.521 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:45.521 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:45.522 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:45.522 11:59:09 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:45.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:45.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.522 ms 00:35:45.522 00:35:45.522 --- 10.0.0.2 ping statistics --- 00:35:45.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:45.522 rtt min/avg/max/mdev = 0.522/0.522/0.522/0.000 ms 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:45.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:45.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:35:45.522 00:35:45.522 --- 10.0.0.1 ping statistics --- 00:35:45.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:45.522 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:45.522 11:59:09 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:45.522 11:59:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:35:45.522 11:59:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:45.522 11:59:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:45.522 11:59:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:45.522 11:59:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=1359889 00:35:45.522 11:59:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 1359889 00:35:45.522 11:59:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:35:45.522 11:59:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@833 -- # '[' -z 1359889 ']' 00:35:45.522 11:59:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:45.522 11:59:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:45.522 11:59:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:45.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:45.522 11:59:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:45.522 11:59:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:45.522 [2024-11-15 11:59:10.107684] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:45.522 [2024-11-15 11:59:10.108825] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:35:45.522 [2024-11-15 11:59:10.108878] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:45.522 [2024-11-15 11:59:10.206901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:45.522 [2024-11-15 11:59:10.259542] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
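The plumbing traced above is the physical-NIC variant of the harness's TCP setup: both e810-class ports (cvl_0_0 and cvl_0_1) live in the same host, so the target-side port is moved into a private network namespace while the initiator side stays in the root namespace. A minimal sketch of that pattern, using the interface and namespace names from the trace (error handling and the ipts/iptr bookkeeping wrappers omitted):

    ns=cvl_0_0_ns_spdk
    ip netns add "$ns"
    ip link set cvl_0_0 netns "$ns"               # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator port stays in the root ns
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$ns" ip link set cvl_0_0 up
    ip netns exec "$ns" ip link set lo up
    # open the NVMe/TCP port on the initiator-facing interface
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                            # sanity check: root ns reaches the target

Every target invocation afterwards is prefixed with ip netns exec "$ns", which is exactly what the NVMF_TARGET_NS_CMD array seen above provides.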
00:35:45.522 [2024-11-15 11:59:10.259611] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:45.522 [2024-11-15 11:59:10.259620] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:45.522 [2024-11-15 11:59:10.259627] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:45.522 [2024-11-15 11:59:10.259633] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:45.522 [2024-11-15 11:59:10.261329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:45.522 [2024-11-15 11:59:10.261333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:45.522 [2024-11-15 11:59:10.339068] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:45.522 [2024-11-15 11:59:10.339756] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:45.522 [2024-11-15 11:59:10.340034] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:45.522 11:59:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:45.522 11:59:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@866 -- # return 0 00:35:45.522 11:59:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:45.522 11:59:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:45.522 11:59:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:45.522 11:59:10 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:45.522 11:59:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:35:45.522 11:59:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:35:45.522 11:59:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:35:45.522 11:59:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:35:45.522 5000+0 records in 00:35:45.522 5000+0 records out 00:35:45.522 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0194251 s, 527 MB/s 00:35:45.522 11:59:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:35:45.522 11:59:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.522 11:59:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:45.784 AIO0 00:35:45.784 11:59:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.784 11:59:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:35:45.784 11:59:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.784 11:59:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:45.784 [2024-11-15 11:59:11.050377] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:45.784 11:59:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.784 11:59:11 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:45.784 11:59:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.784 11:59:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:45.784 11:59:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.784 11:59:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:35:45.784 11:59:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.784 11:59:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:45.784 11:59:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.784 11:59:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:45.784 11:59:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:45.784 11:59:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:45.784 [2024-11-15 11:59:11.094880] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:45.784 11:59:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:45.784 11:59:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:45.784 11:59:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1359889 0 00:35:45.784 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1359889 0 idle 00:35:45.784 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1359889 00:35:45.784 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:45.784 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:45.784 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:45.784 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:45.784 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:45.784 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:45.784 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:45.784 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:45.784 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:45.784 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1359889 -w 256 00:35:45.784 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:45.784 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1359889 root 20 0 128.2g 44928 32256 S 6.7 0.0 0:00.32 reactor_0' 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1359889 root 20 0 128.2g 44928 32256 S 6.7 0.0 0:00.32 reactor_0 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=6 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1359889 1 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1359889 1 idle 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1359889 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1359889 -w 256 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1359897 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1' 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1359897 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:00.00 reactor_1 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1360260 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
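Each reactor_is_idle/reactor_is_busy call above boils down to one batch snapshot of the target's threads. A compact sketch of that sampling step (the function name is illustrative; interrupt/common.sh inlines this logic inside a retry loop of up to 10 samples):

    # %CPU of one reactor thread, read from a single batch `top` snapshot.
    sample_reactor_cpu() {
        local pid=$1 idx=$2
        top -bHn 1 -p "$pid" -w 256 |   # -H lists threads, -w widens the COMMAND column
            grep "reactor_${idx}" |
            sed -e 's/^\s*//g' |        # strip leading spaces so awk fields line up
            awk '{print $9}'            # column 9 is %CPU in top's default layout
    }

    cpu=$(sample_reactor_cpu 1359889 0)   # e.g. 6.7
    cpu=${cpu%.*}                         # truncate to an integer, as the trace shows
    (( cpu > 30 )) && echo busy || echo idle   # 30 is the idle threshold used here

With spdk_nvme_perf now launched in the background (perf_pid=1360260), the harness lowers BUSY_THRESHOLD to 30 and asserts the opposite condition: both reactors must report %CPU above the threshold while 256 queued I/Os are in flight, after which `wait "$perf_pid"` blocks until the 10-second run completes.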
00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1359889 0 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1359889 0 busy 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1359889 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1359889 -w 256 00:35:46.046 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:46.307 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1359889 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.48 reactor_0' 00:35:46.307 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1359889 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.48 reactor_0 00:35:46.307 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:46.307 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:46.307 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:35:46.307 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:35:46.307 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:46.307 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:46.307 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:46.307 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:46.307 11:59:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:46.307 11:59:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:46.307 11:59:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1359889 1 00:35:46.307 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1359889 1 busy 00:35:46.307 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1359889 00:35:46.307 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:46.307 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:46.307 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:46.307 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:46.307 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:46.308 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:46.308 11:59:11 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@25 -- # (( j = 10 )) 00:35:46.308 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:46.308 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1359889 -w 256 00:35:46.308 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:46.568 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1359897 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.27 reactor_1' 00:35:46.568 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1359897 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.27 reactor_1 00:35:46.568 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:46.568 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:46.568 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:35:46.568 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:35:46.568 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:46.568 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:46.568 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:46.568 11:59:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:46.568 11:59:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1360260 00:35:56.571 Initializing NVMe Controllers 00:35:56.571 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:56.571 Controller IO queue size 256, less than required. 00:35:56.571 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:56.571 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:35:56.571 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:35:56.571 Initialization complete. Launching workers. 
00:35:56.571 ========================================================
00:35:56.571 Latency(us)
00:35:56.571 Device Information : IOPS MiB/s Average min max
00:35:56.571 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 18578.01 72.57 13784.76 3915.37 31673.50
00:35:56.571 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 19467.61 76.05 13151.80 8235.96 27641.05
00:35:56.571 ========================================================
00:35:56.571 Total : 38045.62 148.62 13460.88 3915.37 31673.50
00:35:56.571
00:35:56.571 11:59:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:35:56.571 11:59:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1359889 0
00:35:56.571 11:59:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1359889 0 idle
00:35:56.571 11:59:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1359889
00:35:56.571 11:59:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:35:56.571 11:59:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:35:56.571 11:59:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:35:56.571 11:59:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:35:56.571 11:59:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:35:56.571 11:59:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:35:56.571 11:59:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:35:56.571 11:59:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:35:56.571 11:59:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:35:56.571 11:59:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1359889 -w 256
00:35:56.571 11:59:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:35:56.571 11:59:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1359889 root 20 0 128.2g 44928 32256 S 6.7 0.0 0:20.30 reactor_0'
00:35:56.571 11:59:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1359889 root 20 0 128.2g 44928 32256 S 6.7 0.0 0:20.30 reactor_0
00:35:56.571 11:59:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:35:56.571 11:59:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:35:56.571 11:59:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7
00:35:56.571 11:59:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6
00:35:56.571 11:59:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:35:56.571 11:59:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:35:56.571 11:59:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:35:56.571 11:59:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:35:56.571 11:59:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:35:56.571 11:59:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1359889 1
00:35:56.571 11:59:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1359889 1 idle
00:35:56.571 11:59:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1359889
00:35:56.571 11:59:21 nvmf_tcp.nvmf_interrupt --
interrupt/common.sh@11 -- # local idx=1 00:35:56.571 11:59:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:56.571 11:59:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:56.571 11:59:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:56.571 11:59:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:56.571 11:59:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:56.571 11:59:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:56.571 11:59:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:56.571 11:59:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:56.571 11:59:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1359889 -w 256 00:35:56.571 11:59:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:56.572 11:59:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1359897 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1' 00:35:56.572 11:59:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1359897 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1 00:35:56.572 11:59:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:56.572 11:59:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:56.572 11:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:56.572 11:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:56.572 11:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:56.572 11:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:56.572 11:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:56.572 11:59:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:56.572 11:59:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:57.515 11:59:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:35:57.515 11:59:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # local i=0 00:35:57.515 11:59:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:35:57.515 11:59:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:35:57.515 11:59:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # sleep 2 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # return 0 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1359889 0 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1359889 0 idle 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1359889 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1359889 -w 256 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1359889 root 20 0 128.2g 79488 32256 S 6.7 0.1 0:20.69 reactor_0' 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1359889 root 20 0 128.2g 79488 32256 S 6.7 0.1 0:20.69 reactor_0 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1359889 1 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1359889 1 idle 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1359889 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
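The connect-and-wait step traced above follows a simple poll loop: issue nvme connect against the listener, then watch lsblk until a block device with the expected serial appears. Roughly (loop bounds taken from the trace; the real waitforserial in autotest_common.sh differs in small details, and $NVME_HOSTNQN/$NVME_HOSTID come from the harness environment):

    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

    waitforserial() {
        local serial=$1 want=${2:-1} i=0
        while (( i++ <= 15 )); do
            sleep 2
            # count block devices whose SERIAL column matches
            (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") == want )) && return 0
        done
        return 1
    }
    waitforserial SPDKISFASTANDAWESOME

Once the namespace is visible, both reactors are checked again: with the perf load finished and the connected queue pairs idle, interrupt mode should keep the sampled %CPU far below the 30-point idle threshold.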
00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1359889 -w 256 00:35:59.430 11:59:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:59.691 11:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1359897 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.16 reactor_1' 00:35:59.691 11:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1359897 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.16 reactor_1 00:35:59.691 11:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:59.691 11:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:59.691 11:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:59.691 11:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:59.691 11:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:59.691 11:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:59.691 11:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:59.691 11:59:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:59.691 11:59:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:59.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:59.953 11:59:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:59.953 11:59:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1221 -- # local i=0 00:35:59.953 11:59:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:35:59.953 11:59:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:59.953 11:59:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:35:59.953 11:59:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:59.953 11:59:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1233 -- # return 0 00:35:59.953 11:59:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:35:59.953 11:59:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:35:59.953 11:59:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:59.953 11:59:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:35:59.953 11:59:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:59.953 11:59:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:35:59.953 11:59:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:59.953 11:59:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:59.953 rmmod nvme_tcp 00:35:59.953 rmmod nvme_fabrics 00:35:59.953 rmmod nvme_keyring 00:35:59.953 11:59:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:59.953 11:59:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:35:59.953 11:59:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:35:59.953 11:59:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
1359889 ']' 00:35:59.953 11:59:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 1359889 00:35:59.953 11:59:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@952 -- # '[' -z 1359889 ']' 00:35:59.953 11:59:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # kill -0 1359889 00:35:59.953 11:59:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # uname 00:35:59.953 11:59:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:59.953 11:59:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1359889 00:35:59.953 11:59:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:59.953 11:59:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:59.953 11:59:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1359889' 00:35:59.953 killing process with pid 1359889 00:35:59.953 11:59:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@971 -- # kill 1359889 00:36:00.214 11:59:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@976 -- # wait 1359889 00:36:00.214 11:59:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:00.214 11:59:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:00.214 11:59:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:00.214 11:59:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:36:00.214 11:59:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:36:00.214 11:59:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:00.214 11:59:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:36:00.214 11:59:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:00.214 11:59:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:00.214 11:59:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:00.214 11:59:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:36:00.214 11:59:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:02.756 11:59:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:02.756 00:36:02.756 real 0m25.232s 00:36:02.756 user 0m40.199s 00:36:02.756 sys 0m9.704s 00:36:02.756 11:59:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:02.756 11:59:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:36:02.756 ************************************ 00:36:02.756 END TEST nvmf_interrupt 00:36:02.756 ************************************ 00:36:02.756 00:36:02.756 real 30m15.547s 00:36:02.756 user 62m0.235s 00:36:02.756 sys 10m17.954s 00:36:02.756 11:59:27 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:02.756 11:59:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:02.756 ************************************ 00:36:02.756 END TEST nvmf_tcp 00:36:02.756 ************************************ 00:36:02.756 11:59:27 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:36:02.756 11:59:27 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:02.756 11:59:27 -- 
common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:36:02.756 11:59:27 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:02.756 11:59:27 -- common/autotest_common.sh@10 -- # set +x 00:36:02.756 ************************************ 00:36:02.756 START TEST spdkcli_nvmf_tcp 00:36:02.756 ************************************ 00:36:02.756 11:59:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:36:02.756 * Looking for test storage... 00:36:02.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:36:02.756 11:59:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:02.756 11:59:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:36:02.756 11:59:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:02.756 11:59:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:02.756 11:59:27 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:02.756 11:59:27 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:02.756 11:59:27 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:02.757 11:59:27 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:36:02.757 11:59:27 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:36:02.757 11:59:27 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:36:02.757 11:59:27 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:36:02.757 11:59:27 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:36:02.757 11:59:27 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:36:02.757 11:59:27 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:36:02.757 11:59:27 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:02.757 11:59:27 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:36:02.757 11:59:27 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:36:02.757 11:59:27 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:02.757 11:59:27 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:02.757 11:59:27 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:02.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:02.757 --rc genhtml_branch_coverage=1 00:36:02.757 --rc genhtml_function_coverage=1 00:36:02.757 --rc genhtml_legend=1 00:36:02.757 --rc geninfo_all_blocks=1 00:36:02.757 --rc geninfo_unexecuted_blocks=1 00:36:02.757 00:36:02.757 ' 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:02.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:02.757 --rc genhtml_branch_coverage=1 00:36:02.757 --rc genhtml_function_coverage=1 00:36:02.757 --rc genhtml_legend=1 00:36:02.757 --rc geninfo_all_blocks=1 00:36:02.757 --rc geninfo_unexecuted_blocks=1 00:36:02.757 00:36:02.757 ' 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:02.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:02.757 --rc genhtml_branch_coverage=1 00:36:02.757 --rc genhtml_function_coverage=1 00:36:02.757 --rc genhtml_legend=1 00:36:02.757 --rc geninfo_all_blocks=1 00:36:02.757 --rc geninfo_unexecuted_blocks=1 00:36:02.757 00:36:02.757 ' 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:02.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:02.757 --rc genhtml_branch_coverage=1 00:36:02.757 --rc genhtml_function_coverage=1 00:36:02.757 --rc genhtml_legend=1 00:36:02.757 --rc geninfo_all_blocks=1 00:36:02.757 --rc geninfo_unexecuted_blocks=1 00:36:02.757 00:36:02.757 ' 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:36:02.757 
11:59:28 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:36:02.757 11:59:28 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:02.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1363441 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1363441 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # '[' -z 1363441 ']' 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:02.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:02.757 11:59:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:02.757 [2024-11-15 11:59:28.116534] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
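run_nvmf_tgt here is the usual start-and-wait pair: launch the target in the background, record nvmf_tgt_pid, and block in waitforlisten until the RPC socket answers (max_retries=100 in the trace). A sketch of that pattern; the real waitforlisten lives in autotest_common.sh, and using rpc.py's rpc_get_methods as the liveness probe is one reasonable way to phrase it, not necessarily the harness's exact mechanism:

    ./build/bin/nvmf_tgt -m 0x3 -p 0 &
    tgt_pid=$!
    for _ in $(seq 1 100); do
        # bail out fast if the target crashed instead of listening
        kill -0 "$tgt_pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.5
    done

-m 0x3 pins the two reactors to cores 0 and 1, which matches the pair of 'Reactor started' notices that follow.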
00:36:02.757 [2024-11-15 11:59:28.116626] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1363441 ] 00:36:02.757 [2024-11-15 11:59:28.207270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:03.017 [2024-11-15 11:59:28.260929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:03.017 [2024-11-15 11:59:28.260936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:03.587 11:59:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:03.587 11:59:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@866 -- # return 0 00:36:03.587 11:59:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:36:03.587 11:59:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:03.587 11:59:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:03.587 11:59:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:36:03.587 11:59:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:36:03.587 11:59:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:36:03.587 11:59:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:03.587 11:59:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:03.587 11:59:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:36:03.587 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:36:03.587 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:36:03.587 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:36:03.587 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:36:03.587 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:36:03.587 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:36:03.588 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:03.588 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:36:03.588 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:36:03.588 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:03.588 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:03.588 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:36:03.588 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:36:03.588 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:03.588 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:36:03.588 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:36:03.588 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:03.588 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:36:03.588 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:03.588 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:36:03.588 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:36:03.588 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:36:03.588 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:36:03.588 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:36:03.588 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:36:03.588 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:36:03.588 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:36:03.588 ' 00:36:06.885 [2024-11-15 11:59:31.729318] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:07.825 [2024-11-15 11:59:33.093535] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:36:10.366 [2024-11-15 11:59:35.628502] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:36:12.908 [2024-11-15 11:59:37.850844] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:36:14.299 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:36:14.299 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:36:14.299 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:36:14.299 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:36:14.299 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:36:14.299 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:36:14.299 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:36:14.299 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:14.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:36:14.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:36:14.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:14.299 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:14.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:36:14.299 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:14.299 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:14.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:36:14.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:36:14.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:14.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:36:14.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:14.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:36:14.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:36:14.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:36:14.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:36:14.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:36:14.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:36:14.299 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:36:14.299 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:36:14.299 11:59:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:36:14.299 11:59:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:14.299 11:59:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:14.299 11:59:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:36:14.299 11:59:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:14.299 11:59:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:14.299 11:59:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:36:14.299 11:59:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:36:14.870 11:59:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:36:14.870 11:59:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:36:14.870 11:59:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:36:14.870 11:59:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:14.870 11:59:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:14.870 
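Up to this point the trace shows the construction half of the spdkcli test: spdkcli_job.py feeds one-shot spdkcli commands to the running nvmf_tgt and echoes each (command, match string, flag) triple as "Executing command: [...]"; check_match then dumps "ll /nvmf" via scripts/spdkcli.py and diffs it against spdkcli_nvmf.test.match. A minimal sketch of the same construction sequence run by hand, assuming a target already up on the default RPC socket and that spdkcli.py accepts the same one-shot invocation the harness uses for "ll /nvmf" (paths shortened relative to the workspace; the serial numbers are the arbitrary strings from the job above):

  # back a namespace with a 32 MiB malloc bdev using 512-byte blocks
  ./scripts/spdkcli.py '/bdevs/malloc create 32 512 Malloc1'
  # bring up the TCP transport with the test's qpair and IO-unit limits
  ./scripts/spdkcli.py 'nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'
  # create a subsystem, attach the bdev as nsid 1, and listen on loopback
  ./scripts/spdkcli.py '/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'
  ./scripts/spdkcli.py '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc1 1'
  ./scripts/spdkcli.py '/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'

The three "NVMe/TCP Target Listening" notices on ports 4260, 4261 and 4262 earlier in the trace correspond to the listen_addresses entries created by this phase.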
11:59:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:36:14.870 11:59:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:14.870 11:59:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:14.870 11:59:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:36:14.870 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:36:14.870 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:14.870 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:36:14.870 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:36:14.870 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:36:14.870 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:36:14.870 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:36:14.870 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:36:14.870 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:36:14.870 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:36:14.870 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:36:14.870 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:36:14.870 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:36:14.870 ' 00:36:21.448 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:36:21.448 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:36:21.448 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:21.448 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:36:21.448 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:36:21.448 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:36:21.448 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:36:21.448 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:36:21.448 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:36:21.448 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:36:21.448 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:36:21.448 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:36:21.448 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:36:21.448 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:36:21.448 11:59:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:36:21.448 11:59:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:21.448 11:59:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:21.448 
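The clear-config job above empties the target in reverse dependency order: each subsystem's namespaces, hosts and listen addresses are removed before the subsystem itself is deleted, and the malloc bdevs go last, once nothing references them. A rough raw-RPC equivalent of that teardown, as a sketch assuming scripts/rpc.py against the default /var/tmp/spdk.sock (NQNs, ports and bdev names taken from the job above):

  # per-subsystem resources first: namespaces, hosts, listeners
  ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2014-08.org.spdk:cnode1 1
  ./scripts/rpc.py nvmf_subsystem_remove_host nqn.2014-08.org.spdk:cnode1 nqn.2014-08.org.spdk:cnode2
  ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4262
  # then the subsystems themselves
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2014-08.org.spdk:cnode3
  # finally the now-unreferenced backing bdevs
  ./scripts/rpc.py bdev_malloc_delete Malloc6

This ordering is why the job only deletes Malloc1 through Malloc6 after every subsystem is gone: a bdev is safe to remove once no namespace is backed by it.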
11:59:45 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1363441 00:36:21.448 11:59:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 1363441 ']' 00:36:21.448 11:59:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 1363441 00:36:21.448 11:59:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # uname 00:36:21.448 11:59:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:21.448 11:59:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1363441 00:36:21.448 11:59:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:21.448 11:59:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:21.448 11:59:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1363441' 00:36:21.448 killing process with pid 1363441 00:36:21.448 11:59:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # kill 1363441 00:36:21.448 11:59:45 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # wait 1363441 00:36:21.448 11:59:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:36:21.448 11:59:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:36:21.448 11:59:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1363441 ']' 00:36:21.448 11:59:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1363441 00:36:21.448 11:59:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 1363441 ']' 00:36:21.448 11:59:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 1363441 00:36:21.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1363441) - No such process 00:36:21.448 11:59:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@979 -- # echo 'Process with pid 1363441 is not found' 00:36:21.448 Process with pid 1363441 is not found 00:36:21.448 11:59:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:36:21.448 11:59:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:36:21.448 11:59:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:36:21.448 00:36:21.448 real 0m18.209s 00:36:21.448 user 0m40.438s 00:36:21.448 sys 0m0.934s 00:36:21.448 11:59:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:21.448 11:59:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:21.448 ************************************ 00:36:21.449 END TEST spdkcli_nvmf_tcp 00:36:21.449 ************************************ 00:36:21.449 11:59:46 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:21.449 11:59:46 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:36:21.449 11:59:46 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:21.449 11:59:46 -- common/autotest_common.sh@10 -- # set +x 00:36:21.449 ************************************ 00:36:21.449 START TEST nvmf_identify_passthru 00:36:21.449 ************************************ 00:36:21.449 11:59:46 nvmf_identify_passthru -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:36:21.449 * Looking for test 
storage... 00:36:21.449 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:21.449 11:59:46 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:21.449 11:59:46 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:36:21.449 11:59:46 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:21.449 11:59:46 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:21.449 11:59:46 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:21.449 11:59:46 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:21.449 11:59:46 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:21.449 11:59:46 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:36:21.449 11:59:46 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:36:21.449 11:59:46 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:36:21.449 11:59:46 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:36:21.449 11:59:46 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:36:21.449 11:59:46 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:36:21.449 11:59:46 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:36:21.449 11:59:46 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:21.449 11:59:46 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:36:21.449 11:59:46 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:36:21.449 11:59:46 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:21.449 11:59:46 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:21.449 11:59:46 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:36:21.449 11:59:46 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:36:21.449 11:59:46 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:21.449 11:59:46 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:36:21.449 11:59:46 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:36:21.449 11:59:46 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:36:21.449 11:59:46 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:36:21.449 11:59:46 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:21.449 11:59:46 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:36:21.449 11:59:46 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:36:21.449 11:59:46 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:21.449 11:59:46 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:21.449 11:59:46 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:36:21.449 11:59:46 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:21.449 11:59:46 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:21.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:21.449 --rc genhtml_branch_coverage=1 00:36:21.449 --rc genhtml_function_coverage=1 00:36:21.449 --rc genhtml_legend=1 00:36:21.449 --rc geninfo_all_blocks=1 00:36:21.449 --rc geninfo_unexecuted_blocks=1 00:36:21.449 00:36:21.449 ' 00:36:21.449 11:59:46 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:21.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:21.449 --rc genhtml_branch_coverage=1 00:36:21.449 --rc genhtml_function_coverage=1 00:36:21.449 --rc genhtml_legend=1 00:36:21.449 --rc geninfo_all_blocks=1 00:36:21.449 --rc geninfo_unexecuted_blocks=1 00:36:21.449 00:36:21.449 ' 00:36:21.449 11:59:46 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:21.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:21.449 --rc genhtml_branch_coverage=1 00:36:21.449 --rc genhtml_function_coverage=1 00:36:21.449 --rc genhtml_legend=1 00:36:21.449 --rc geninfo_all_blocks=1 00:36:21.449 --rc geninfo_unexecuted_blocks=1 00:36:21.449 00:36:21.449 ' 00:36:21.449 11:59:46 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:21.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:21.449 --rc genhtml_branch_coverage=1 00:36:21.449 --rc genhtml_function_coverage=1 00:36:21.449 --rc genhtml_legend=1 00:36:21.449 --rc geninfo_all_blocks=1 00:36:21.449 --rc geninfo_unexecuted_blocks=1 00:36:21.449 00:36:21.449 ' 00:36:21.449 11:59:46 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:21.449 11:59:46 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:36:21.449 11:59:46 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:21.449 11:59:46 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:21.449 11:59:46 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:21.449 11:59:46 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:36:21.449 11:59:46 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:21.449 11:59:46 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:21.449 11:59:46 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:21.449 11:59:46 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:21.449 11:59:46 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:21.449 11:59:46 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:21.449 11:59:46 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:21.449 11:59:46 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:21.449 11:59:46 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:21.449 11:59:46 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:21.449 11:59:46 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:21.449 11:59:46 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:21.449 11:59:46 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:21.449 11:59:46 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:36:21.449 11:59:46 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:21.449 11:59:46 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:21.449 11:59:46 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:21.449 11:59:46 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.449 11:59:46 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.449 11:59:46 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.449 11:59:46 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:21.449 11:59:46 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.449 11:59:46 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:36:21.449 11:59:46 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:21.449 11:59:46 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:21.449 11:59:46 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:21.449 11:59:46 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:21.449 11:59:46 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:21.449 11:59:46 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:21.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:21.449 11:59:46 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:21.449 11:59:46 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:21.449 11:59:46 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:21.449 11:59:46 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:21.449 11:59:46 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:36:21.449 11:59:46 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:21.449 11:59:46 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:21.449 11:59:46 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:21.449 11:59:46 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.450 11:59:46 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.450 11:59:46 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.450 11:59:46 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:36:21.450 11:59:46 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.450 11:59:46 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:36:21.450 11:59:46 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:21.450 11:59:46 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:21.450 11:59:46 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:21.450 11:59:46 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:21.450 11:59:46 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:21.450 11:59:46 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:21.450 11:59:46 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:21.450 11:59:46 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:21.450 11:59:46 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:21.450 11:59:46 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:21.450 11:59:46 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:36:21.450 11:59:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:36:28.030 11:59:53 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:28.030 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:28.030 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:28.030 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:28.031 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:28.031 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:28.031 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:28.031 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:28.031 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:28.031 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:28.031 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:28.031 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:28.031 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:28.031 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:28.031 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:28.031 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:28.031 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:28.031 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:28.031 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:28.031 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:28.031 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:28.031 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:28.031 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:28.031 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:28.031 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:36:28.031 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:28.031 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:28.031 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:28.031 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:28.031 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:28.031 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:28.031 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:28.031 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:28.031 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:28.031 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:28.031 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:28.031 11:59:53 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:28.031 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:28.031 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:28.031 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:28.031 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:28.031 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:28.031 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:28.291 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:28.291 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:28.291 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:28.291 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:28.291 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:28.291 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:28.291 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:28.291 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:28.292 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:28.292 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.488 ms 00:36:28.292 00:36:28.292 --- 10.0.0.2 ping statistics --- 00:36:28.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:28.292 rtt min/avg/max/mdev = 0.488/0.488/0.488/0.000 ms 00:36:28.292 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:28.292 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:28.292 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:36:28.292 00:36:28.292 --- 10.0.0.1 ping statistics --- 00:36:28.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:28.292 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:36:28.292 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:28.292 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:36:28.292 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:28.292 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:28.292 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:28.292 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:28.292 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:28.292 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:28.292 11:59:53 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:28.292 11:59:53 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:36:28.292 11:59:53 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:28.292 11:59:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:28.292 11:59:53 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:36:28.292 11:59:53 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:36:28.292 11:59:53 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:36:28.292 11:59:53 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:36:28.292 11:59:53 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:36:28.292 11:59:53 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:36:28.292 11:59:53 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:36:28.292 11:59:53 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:36:28.292 11:59:53 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:36:28.292 11:59:53 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:36:28.552 11:59:53 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:36:28.552 11:59:53 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:36:28.552 11:59:53 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:65:00.0 00:36:28.552 11:59:53 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:36:28.552 11:59:53 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:36:28.552 11:59:53 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:36:28.552 11:59:53 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:36:28.552 11:59:53 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:36:29.121 11:59:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=S64GNE0R605487 00:36:29.121 11:59:54 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:36:29.121 11:59:54 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:36:29.121 11:59:54 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:36:29.391 11:59:54 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:36:29.391 11:59:54 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:36:29.391 11:59:54 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:29.391 11:59:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:29.650 11:59:54 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:36:29.650 11:59:54 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:29.650 11:59:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:29.650 11:59:54 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1370854 00:36:29.650 11:59:54 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:29.650 11:59:54 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:36:29.650 11:59:54 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1370854 00:36:29.650 11:59:54 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # '[' -z 1370854 ']' 00:36:29.650 11:59:54 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:29.650 11:59:54 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:29.650 11:59:54 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:29.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:29.650 11:59:54 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:29.650 11:59:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:29.650 [2024-11-15 11:59:54.950058] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:36:29.650 [2024-11-15 11:59:54.950108] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:29.650 [2024-11-15 11:59:55.044338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:29.650 [2024-11-15 11:59:55.081779] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:29.650 [2024-11-15 11:59:55.081813] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:36:29.650 [2024-11-15 11:59:55.081821] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:29.650 [2024-11-15 11:59:55.081828] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:29.650 [2024-11-15 11:59:55.081834] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:29.650 [2024-11-15 11:59:55.083376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:29.650 [2024-11-15 11:59:55.083535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:29.650 [2024-11-15 11:59:55.083677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:29.650 [2024-11-15 11:59:55.083823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:30.593 11:59:55 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:30.593 11:59:55 nvmf_identify_passthru -- common/autotest_common.sh@866 -- # return 0 00:36:30.593 11:59:55 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:36:30.593 11:59:55 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.593 11:59:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:30.593 INFO: Log level set to 20 00:36:30.593 INFO: Requests: 00:36:30.593 { 00:36:30.593 "jsonrpc": "2.0", 00:36:30.593 "method": "nvmf_set_config", 00:36:30.593 "id": 1, 00:36:30.593 "params": { 00:36:30.593 "admin_cmd_passthru": { 00:36:30.593 "identify_ctrlr": true 00:36:30.593 } 00:36:30.593 } 00:36:30.593 } 00:36:30.593 00:36:30.593 INFO: response: 00:36:30.593 { 00:36:30.593 "jsonrpc": "2.0", 00:36:30.593 "id": 1, 00:36:30.593 "result": true 00:36:30.593 } 00:36:30.593 00:36:30.593 11:59:55 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.593 11:59:55 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:36:30.593 11:59:55 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.593 11:59:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:30.593 INFO: Setting log level to 20 00:36:30.593 INFO: Setting log level to 20 00:36:30.593 INFO: Log level set to 20 00:36:30.593 INFO: Log level set to 20 00:36:30.593 INFO: Requests: 00:36:30.593 { 00:36:30.593 "jsonrpc": "2.0", 00:36:30.593 "method": "framework_start_init", 00:36:30.593 "id": 1 00:36:30.593 } 00:36:30.593 00:36:30.593 INFO: Requests: 00:36:30.593 { 00:36:30.593 "jsonrpc": "2.0", 00:36:30.593 "method": "framework_start_init", 00:36:30.593 "id": 1 00:36:30.593 } 00:36:30.593 00:36:30.593 [2024-11-15 11:59:55.855339] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:36:30.593 INFO: response: 00:36:30.593 { 00:36:30.593 "jsonrpc": "2.0", 00:36:30.593 "id": 1, 00:36:30.593 "result": true 00:36:30.593 } 00:36:30.593 00:36:30.593 INFO: response: 00:36:30.593 { 00:36:30.593 "jsonrpc": "2.0", 00:36:30.593 "id": 1, 00:36:30.593 "result": true 00:36:30.593 } 00:36:30.593 00:36:30.593 11:59:55 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.593 11:59:55 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:30.593 11:59:55 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.593 11:59:55 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:36:30.593 INFO: Setting log level to 40 00:36:30.593 INFO: Setting log level to 40 00:36:30.593 INFO: Setting log level to 40 00:36:30.593 [2024-11-15 11:59:55.869135] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:30.593 11:59:55 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.593 11:59:55 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:36:30.593 11:59:55 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:30.593 11:59:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:30.593 11:59:55 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:36:30.593 11:59:55 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.593 11:59:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:30.854 Nvme0n1 00:36:30.854 11:59:56 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.855 11:59:56 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:36:30.855 11:59:56 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.855 11:59:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:30.855 11:59:56 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.855 11:59:56 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:36:30.855 11:59:56 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.855 11:59:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:30.855 11:59:56 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.855 11:59:56 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:30.855 11:59:56 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.855 11:59:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:30.855 [2024-11-15 11:59:56.285382] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:30.855 11:59:56 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.855 11:59:56 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:36:30.855 11:59:56 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.855 11:59:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:30.855 [ 00:36:30.855 { 00:36:30.855 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:30.855 "subtype": "Discovery", 00:36:30.855 "listen_addresses": [], 00:36:30.855 "allow_any_host": true, 00:36:30.855 "hosts": [] 00:36:30.855 }, 00:36:30.855 { 00:36:30.855 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:30.855 "subtype": "NVMe", 00:36:30.855 "listen_addresses": [ 00:36:30.855 { 00:36:30.855 "trtype": "TCP", 00:36:30.855 "adrfam": "IPv4", 00:36:30.855 "traddr": "10.0.0.2", 00:36:30.855 "trsvcid": "4420" 00:36:30.855 } 00:36:30.855 ], 00:36:30.855 "allow_any_host": true, 00:36:30.855 "hosts": [], 00:36:30.855 "serial_number": 
"SPDK00000000000001", 00:36:30.855 "model_number": "SPDK bdev Controller", 00:36:30.855 "max_namespaces": 1, 00:36:30.855 "min_cntlid": 1, 00:36:30.855 "max_cntlid": 65519, 00:36:30.855 "namespaces": [ 00:36:30.855 { 00:36:30.855 "nsid": 1, 00:36:30.855 "bdev_name": "Nvme0n1", 00:36:30.855 "name": "Nvme0n1", 00:36:30.855 "nguid": "36344730526054870025384500000044", 00:36:30.855 "uuid": "36344730-5260-5487-0025-384500000044" 00:36:30.855 } 00:36:30.855 ] 00:36:30.855 } 00:36:30.855 ] 00:36:30.855 11:59:56 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.855 11:59:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:30.855 11:59:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:36:30.855 11:59:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:36:31.116 11:59:56 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:36:31.117 11:59:56 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:31.117 11:59:56 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:36:31.117 11:59:56 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:36:31.377 11:59:56 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:36:31.377 11:59:56 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:36:31.377 11:59:56 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:36:31.377 11:59:56 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:31.377 11:59:56 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:31.377 11:59:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:31.377 11:59:56 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:31.377 11:59:56 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:36:31.377 11:59:56 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:36:31.377 11:59:56 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:31.377 11:59:56 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:36:31.377 11:59:56 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:31.377 11:59:56 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:36:31.377 11:59:56 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:31.377 11:59:56 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:31.377 rmmod nvme_tcp 00:36:31.377 rmmod nvme_fabrics 00:36:31.377 rmmod nvme_keyring 00:36:31.377 11:59:56 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:31.637 11:59:56 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:36:31.637 11:59:56 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:36:31.637 11:59:56 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 
1370854 ']' 00:36:31.637 11:59:56 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 1370854 00:36:31.637 11:59:56 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' -z 1370854 ']' 00:36:31.637 11:59:56 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # kill -0 1370854 00:36:31.637 11:59:56 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # uname 00:36:31.637 11:59:56 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:31.637 11:59:56 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1370854 00:36:31.637 11:59:56 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:31.637 11:59:56 nvmf_identify_passthru -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:31.637 11:59:56 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1370854' 00:36:31.637 killing process with pid 1370854 00:36:31.637 11:59:56 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # kill 1370854 00:36:31.637 11:59:56 nvmf_identify_passthru -- common/autotest_common.sh@976 -- # wait 1370854 00:36:31.896 11:59:57 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:31.896 11:59:57 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:31.896 11:59:57 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:31.896 11:59:57 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:36:31.896 11:59:57 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:36:31.896 11:59:57 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:31.896 11:59:57 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:36:31.896 11:59:57 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:31.896 11:59:57 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:31.896 11:59:57 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:31.896 11:59:57 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:31.897 11:59:57 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:33.807 11:59:59 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:33.807 00:36:33.807 real 0m13.178s 00:36:33.807 user 0m10.719s 00:36:33.807 sys 0m6.590s 00:36:33.807 11:59:59 nvmf_identify_passthru -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:33.807 11:59:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:33.807 ************************************ 00:36:33.807 END TEST nvmf_identify_passthru 00:36:33.807 ************************************ 00:36:34.069 11:59:59 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:34.069 11:59:59 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:34.069 11:59:59 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:34.069 11:59:59 -- common/autotest_common.sh@10 -- # set +x 00:36:34.069 ************************************ 00:36:34.069 START TEST nvmf_dif 00:36:34.069 ************************************ 00:36:34.069 11:59:59 nvmf_dif -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:34.069 * Looking for test storage... 
00:36:34.069 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:34.069 11:59:59 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:34.069 11:59:59 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:36:34.069 11:59:59 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:34.069 11:59:59 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:34.069 11:59:59 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:34.069 11:59:59 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:34.069 11:59:59 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:34.069 11:59:59 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:36:34.069 11:59:59 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:36:34.069 11:59:59 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:36:34.069 11:59:59 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:36:34.069 11:59:59 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:36:34.069 11:59:59 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:36:34.069 11:59:59 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:36:34.069 11:59:59 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:34.069 11:59:59 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:36:34.069 11:59:59 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:36:34.069 11:59:59 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:34.069 11:59:59 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:34.069 11:59:59 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:36:34.069 11:59:59 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:36:34.069 11:59:59 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:34.069 11:59:59 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:36:34.069 11:59:59 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:36:34.069 11:59:59 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:36:34.069 11:59:59 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:36:34.069 11:59:59 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:34.069 11:59:59 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:36:34.331 11:59:59 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:36:34.331 11:59:59 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:34.331 11:59:59 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:34.331 11:59:59 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:36:34.331 11:59:59 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:34.331 11:59:59 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:34.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:34.331 --rc genhtml_branch_coverage=1 00:36:34.331 --rc genhtml_function_coverage=1 00:36:34.331 --rc genhtml_legend=1 00:36:34.331 --rc geninfo_all_blocks=1 00:36:34.331 --rc geninfo_unexecuted_blocks=1 00:36:34.331 00:36:34.331 ' 00:36:34.331 11:59:59 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:34.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:34.331 --rc genhtml_branch_coverage=1 00:36:34.331 --rc genhtml_function_coverage=1 00:36:34.331 --rc genhtml_legend=1 00:36:34.331 --rc geninfo_all_blocks=1 00:36:34.331 --rc geninfo_unexecuted_blocks=1 00:36:34.331 00:36:34.331 ' 00:36:34.331 11:59:59 nvmf_dif -- common/autotest_common.sh@1705 -- # 
export 'LCOV=lcov 00:36:34.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:34.331 --rc genhtml_branch_coverage=1 00:36:34.331 --rc genhtml_function_coverage=1 00:36:34.331 --rc genhtml_legend=1 00:36:34.331 --rc geninfo_all_blocks=1 00:36:34.331 --rc geninfo_unexecuted_blocks=1 00:36:34.331 00:36:34.331 ' 00:36:34.331 11:59:59 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:34.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:34.331 --rc genhtml_branch_coverage=1 00:36:34.331 --rc genhtml_function_coverage=1 00:36:34.331 --rc genhtml_legend=1 00:36:34.331 --rc geninfo_all_blocks=1 00:36:34.331 --rc geninfo_unexecuted_blocks=1 00:36:34.331 00:36:34.331 ' 00:36:34.331 11:59:59 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:34.331 11:59:59 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:36:34.331 11:59:59 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:34.331 11:59:59 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:34.331 11:59:59 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:34.331 11:59:59 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:34.331 11:59:59 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:34.331 11:59:59 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:34.331 11:59:59 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:34.331 11:59:59 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:34.331 11:59:59 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:34.331 11:59:59 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:34.331 11:59:59 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:34.331 11:59:59 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:34.331 11:59:59 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:34.331 11:59:59 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:34.331 11:59:59 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:34.331 11:59:59 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:34.331 11:59:59 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:34.331 11:59:59 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:36:34.331 11:59:59 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:34.331 11:59:59 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:34.331 11:59:59 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:34.331 11:59:59 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.331 11:59:59 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.331 11:59:59 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.331 11:59:59 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:36:34.331 11:59:59 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:34.331 11:59:59 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:36:34.331 11:59:59 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:34.331 11:59:59 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:34.331 11:59:59 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:34.331 11:59:59 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:34.331 11:59:59 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:34.331 11:59:59 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:34.331 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:34.331 11:59:59 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:34.331 11:59:59 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:34.331 11:59:59 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:34.331 11:59:59 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:36:34.331 11:59:59 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:36:34.331 11:59:59 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:36:34.331 11:59:59 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:36:34.331 11:59:59 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:36:34.331 11:59:59 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:34.331 11:59:59 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:34.331 11:59:59 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:34.331 11:59:59 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:34.331 11:59:59 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:34.331 11:59:59 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:34.331 11:59:59 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:34.331 11:59:59 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:34.331 11:59:59 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:34.331 11:59:59 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:34.332 11:59:59 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:36:34.332 11:59:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:42.470 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:42.470 
12:00:06 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:42.470 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:42.470 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:42.470 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:42.470 12:00:06 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:42.471 12:00:06 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:42.471 12:00:06 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:42.471 12:00:06 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:42.471 12:00:06 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:42.471 12:00:06 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:42.471 12:00:06 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:42.471 12:00:06 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:42.471 12:00:06 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:42.471 12:00:06 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:42.471 12:00:06 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:42.471 12:00:06 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:42.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:42.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.452 ms 00:36:42.471 00:36:42.471 --- 10.0.0.2 ping statistics --- 00:36:42.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:42.471 rtt min/avg/max/mdev = 0.452/0.452/0.452/0.000 ms 00:36:42.471 12:00:06 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:42.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:42.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.313 ms 00:36:42.471 00:36:42.471 --- 10.0.0.1 ping statistics --- 00:36:42.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:42.471 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:36:42.471 12:00:06 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:42.471 12:00:06 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:36:42.471 12:00:06 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:36:42.471 12:00:06 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:45.018 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:45.018 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:45.018 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:45.018 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:45.018 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:45.018 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:45.018 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:45.018 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:45.018 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:36:45.018 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:36:45.018 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:36:45.018 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:36:45.018 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:36:45.018 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:36:45.018 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:36:45.018 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:36:45.018 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:36:45.280 12:00:10 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:45.280 12:00:10 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:45.280 12:00:10 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:45.280 12:00:10 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:45.280 12:00:10 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:45.280 12:00:10 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:45.542 12:00:10 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:36:45.542 12:00:10 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:36:45.542 12:00:10 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:45.542 12:00:10 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:45.542 12:00:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:45.542 12:00:10 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=1377303 00:36:45.542 12:00:10 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 1377303 00:36:45.542 12:00:10 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:36:45.542 12:00:10 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 1377303 ']' 00:36:45.542 12:00:10 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:45.542 12:00:10 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:45.542 12:00:10 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:36:45.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:45.542 12:00:10 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:45.542 12:00:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:45.542 [2024-11-15 12:00:10.875117] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:36:45.542 [2024-11-15 12:00:10.875184] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:45.542 [2024-11-15 12:00:10.973172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:45.542 [2024-11-15 12:00:11.025822] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:45.542 [2024-11-15 12:00:11.025870] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:45.542 [2024-11-15 12:00:11.025879] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:45.542 [2024-11-15 12:00:11.025886] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:45.542 [2024-11-15 12:00:11.025892] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:45.542 [2024-11-15 12:00:11.026731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:46.485 12:00:11 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:46.485 12:00:11 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:36:46.485 12:00:11 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:46.485 12:00:11 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:46.485 12:00:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:46.485 12:00:11 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:46.485 12:00:11 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:36:46.485 12:00:11 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:36:46.485 12:00:11 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.485 12:00:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:46.485 [2024-11-15 12:00:11.728168] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:46.485 12:00:11 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:46.485 12:00:11 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:36:46.485 12:00:11 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:46.485 12:00:11 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:46.485 12:00:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:46.485 ************************************ 00:36:46.485 START TEST fio_dif_1_default 00:36:46.485 ************************************ 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:46.485 bdev_null0 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:46.485 [2024-11-15 12:00:11.816615] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:46.485 { 00:36:46.485 "params": { 00:36:46.485 "name": "Nvme$subsystem", 00:36:46.485 "trtype": "$TEST_TRANSPORT", 00:36:46.485 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:46.485 "adrfam": "ipv4", 00:36:46.485 "trsvcid": "$NVMF_PORT", 00:36:46.485 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:46.485 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:46.485 "hdgst": ${hdgst:-false}, 00:36:46.485 
"ddgst": ${ddgst:-false} 00:36:46.485 }, 00:36:46.485 "method": "bdev_nvme_attach_controller" 00:36:46.485 } 00:36:46.485 EOF 00:36:46.485 )") 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:36:46.485 12:00:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:46.486 12:00:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:36:46.486 12:00:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:36:46.486 12:00:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:36:46.486 12:00:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:46.486 12:00:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:36:46.486 12:00:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:36:46.486 12:00:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:46.486 "params": { 00:36:46.486 "name": "Nvme0", 00:36:46.486 "trtype": "tcp", 00:36:46.486 "traddr": "10.0.0.2", 00:36:46.486 "adrfam": "ipv4", 00:36:46.486 "trsvcid": "4420", 00:36:46.486 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:46.486 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:46.486 "hdgst": false, 00:36:46.486 "ddgst": false 00:36:46.486 }, 00:36:46.486 "method": "bdev_nvme_attach_controller" 00:36:46.486 }' 00:36:46.486 12:00:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:46.486 12:00:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:46.486 12:00:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:46.486 12:00:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:46.486 12:00:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:46.486 12:00:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:46.486 12:00:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:46.486 12:00:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:46.486 12:00:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:46.486 12:00:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:47.067 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:47.067 fio-3.35 00:36:47.067 Starting 1 thread 00:36:59.302 00:36:59.302 filename0: (groupid=0, jobs=1): err= 0: pid=1377980: Fri Nov 15 12:00:22 2024 00:36:59.302 read: IOPS=190, BW=761KiB/s (779kB/s)(7632KiB/10033msec) 00:36:59.302 slat (nsec): min=5485, max=32247, avg=6389.60, stdev=1437.97 00:36:59.302 clat (usec): min=610, max=45428, avg=21015.19, stdev=20185.58 00:36:59.302 lat (usec): min=616, max=45460, avg=21021.58, stdev=20185.58 00:36:59.302 clat percentiles (usec): 00:36:59.302 | 1.00th=[ 644], 5.00th=[ 783], 10.00th=[ 840], 20.00th=[ 865], 00:36:59.302 | 30.00th=[ 898], 40.00th=[ 938], 50.00th=[ 1123], 60.00th=[41157], 00:36:59.302 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:36:59.302 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:36:59.302 | 99.99th=[45351] 00:36:59.302 bw ( KiB/s): min= 704, max= 768, per=100.00%, avg=761.60, stdev=16.74, samples=20 00:36:59.302 iops : min= 176, max= 192, avg=190.40, stdev= 4.19, samples=20 00:36:59.302 lat (usec) : 750=2.46%, 1000=45.49% 00:36:59.302 lat (msec) : 2=2.15%, 50=49.90% 00:36:59.302 cpu : usr=93.24%, sys=6.55%, ctx=17, majf=0, minf=230 00:36:59.302 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:59.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:59.302 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:59.302 latency : target=0, window=0, percentile=100.00%, depth=4 
00:36:59.302 00:36:59.302 Run status group 0 (all jobs): 00:36:59.302 READ: bw=761KiB/s (779kB/s), 761KiB/s-761KiB/s (779kB/s-779kB/s), io=7632KiB (7815kB), run=10033-10033msec 00:36:59.302 12:00:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:36:59.302 12:00:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:36:59.302 12:00:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:36:59.302 12:00:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:59.302 12:00:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:36:59.302 12:00:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:59.302 12:00:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:59.302 12:00:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:59.302 12:00:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:59.302 12:00:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:59.302 12:00:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:59.302 12:00:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:59.302 12:00:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:59.302 00:36:59.302 real 0m11.298s 00:36:59.302 user 0m28.386s 00:36:59.302 sys 0m0.996s 00:36:59.302 12:00:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:59.302 12:00:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:59.302 ************************************ 00:36:59.302 END TEST fio_dif_1_default 00:36:59.302 ************************************ 00:36:59.302 12:00:23 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:36:59.302 12:00:23 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:59.302 12:00:23 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:59.302 12:00:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:59.302 ************************************ 00:36:59.302 START TEST fio_dif_1_multi_subsystems 00:36:59.302 ************************************ 00:36:59.302 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:36:59.302 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:36:59.302 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:36:59.302 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:36:59.302 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:59.302 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:36:59.302 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:36:59.302 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:59.302 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:59.302 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:59.302 bdev_null0 00:36:59.302 12:00:23 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:59.302 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:59.302 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:59.302 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:59.302 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:59.302 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:59.302 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:59.302 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:59.302 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:59.302 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:59.302 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:59.302 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:59.302 [2024-11-15 12:00:23.197141] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:59.302 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:59.302 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:59.302 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:36:59.302 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:59.303 bdev_null1 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:59.303 { 00:36:59.303 "params": { 00:36:59.303 "name": "Nvme$subsystem", 00:36:59.303 "trtype": "$TEST_TRANSPORT", 00:36:59.303 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:59.303 "adrfam": "ipv4", 00:36:59.303 "trsvcid": "$NVMF_PORT", 00:36:59.303 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:59.303 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:59.303 "hdgst": ${hdgst:-false}, 00:36:59.303 "ddgst": ${ddgst:-false} 00:36:59.303 }, 00:36:59.303 "method": "bdev_nvme_attach_controller" 00:36:59.303 } 00:36:59.303 EOF 00:36:59.303 )") 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:59.303 
12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:59.303 { 00:36:59.303 "params": { 00:36:59.303 "name": "Nvme$subsystem", 00:36:59.303 "trtype": "$TEST_TRANSPORT", 00:36:59.303 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:59.303 "adrfam": "ipv4", 00:36:59.303 "trsvcid": "$NVMF_PORT", 00:36:59.303 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:59.303 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:59.303 "hdgst": ${hdgst:-false}, 00:36:59.303 "ddgst": ${ddgst:-false} 00:36:59.303 }, 00:36:59.303 "method": "bdev_nvme_attach_controller" 00:36:59.303 } 00:36:59.303 EOF 00:36:59.303 )") 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:59.303 "params": { 00:36:59.303 "name": "Nvme0", 00:36:59.303 "trtype": "tcp", 00:36:59.303 "traddr": "10.0.0.2", 00:36:59.303 "adrfam": "ipv4", 00:36:59.303 "trsvcid": "4420", 00:36:59.303 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:59.303 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:59.303 "hdgst": false, 00:36:59.303 "ddgst": false 00:36:59.303 }, 00:36:59.303 "method": "bdev_nvme_attach_controller" 00:36:59.303 },{ 00:36:59.303 "params": { 00:36:59.303 "name": "Nvme1", 00:36:59.303 "trtype": "tcp", 00:36:59.303 "traddr": "10.0.0.2", 00:36:59.303 "adrfam": "ipv4", 00:36:59.303 "trsvcid": "4420", 00:36:59.303 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:59.303 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:59.303 "hdgst": false, 00:36:59.303 "ddgst": false 00:36:59.303 }, 00:36:59.303 "method": "bdev_nvme_attach_controller" 00:36:59.303 }' 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 
-- # asan_lib= 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:59.303 12:00:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:59.303 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:59.303 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:59.303 fio-3.35 00:36:59.303 Starting 2 threads 00:37:09.289 00:37:09.289 filename0: (groupid=0, jobs=1): err= 0: pid=1380325: Fri Nov 15 12:00:34 2024 00:37:09.289 read: IOPS=190, BW=762KiB/s (780kB/s)(7616KiB/10001msec) 00:37:09.289 slat (nsec): min=5489, max=32074, avg=6415.52, stdev=1618.83 00:37:09.289 clat (usec): min=627, max=42325, avg=20992.84, stdev=20128.05 00:37:09.289 lat (usec): min=635, max=42331, avg=20999.26, stdev=20127.96 00:37:09.289 clat percentiles (usec): 00:37:09.290 | 1.00th=[ 717], 5.00th=[ 791], 10.00th=[ 840], 20.00th=[ 865], 00:37:09.290 | 30.00th=[ 889], 40.00th=[ 914], 50.00th=[ 1106], 60.00th=[41157], 00:37:09.290 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:37:09.290 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:37:09.290 | 99.99th=[42206] 00:37:09.290 bw ( KiB/s): min= 702, max= 768, per=66.14%, avg=761.16, stdev=20.50, samples=19 00:37:09.290 iops : min= 175, max= 192, avg=190.26, stdev= 5.21, samples=19 00:37:09.290 lat (usec) : 750=1.68%, 1000=47.69% 00:37:09.290 lat (msec) : 2=0.63%, 50=50.00% 00:37:09.290 cpu : usr=96.10%, sys=3.70%, ctx=13, majf=0, minf=201 00:37:09.290 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:09.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.290 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.290 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:09.290 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:09.290 filename1: (groupid=0, jobs=1): err= 0: pid=1380326: Fri Nov 15 12:00:34 2024 00:37:09.290 read: IOPS=97, BW=391KiB/s (400kB/s)(3920KiB/10026msec) 00:37:09.290 slat (nsec): min=5496, max=31920, avg=5983.37, stdev=1946.19 00:37:09.290 clat (usec): min=1867, max=42891, avg=40905.53, stdev=2515.23 00:37:09.290 lat (usec): min=1879, max=42917, avg=40911.51, stdev=2514.60 00:37:09.290 clat percentiles (usec): 00:37:09.290 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:37:09.290 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:37:09.290 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:37:09.290 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:37:09.290 | 99.99th=[42730] 00:37:09.290 bw ( KiB/s): min= 384, max= 416, per=33.90%, avg=390.40, stdev=13.13, samples=20 00:37:09.290 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:37:09.290 lat (msec) : 2=0.41%, 50=99.59% 00:37:09.290 cpu : usr=95.91%, sys=3.90%, ctx=13, majf=0, minf=59 00:37:09.290 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:09.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:37:09.290 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:09.290 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:09.290 latency : target=0, window=0, percentile=100.00%, depth=4 00:37:09.290 00:37:09.290 Run status group 0 (all jobs): 00:37:09.290 READ: bw=1151KiB/s (1178kB/s), 391KiB/s-762KiB/s (400kB/s-780kB/s), io=11.3MiB (11.8MB), run=10001-10026msec 00:37:09.290 12:00:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:37:09.290 12:00:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:37:09.290 12:00:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:09.290 12:00:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:09.290 12:00:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:37:09.290 12:00:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:09.290 12:00:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.290 12:00:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:09.290 12:00:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.290 12:00:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:09.290 12:00:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.290 12:00:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:09.290 12:00:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.290 12:00:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:37:09.290 12:00:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:09.290 12:00:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:37:09.290 12:00:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:09.290 12:00:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.290 12:00:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:09.290 12:00:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.290 12:00:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:09.290 12:00:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.290 12:00:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:09.290 12:00:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.290 00:37:09.290 real 0m11.436s 00:37:09.290 user 0m31.635s 00:37:09.290 sys 0m1.093s 00:37:09.290 12:00:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:09.290 12:00:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:37:09.290 ************************************ 00:37:09.290 END TEST fio_dif_1_multi_subsystems 00:37:09.290 ************************************ 00:37:09.290 12:00:34 nvmf_dif -- target/dif.sh@143 -- # run_test 
fio_dif_rand_params fio_dif_rand_params 00:37:09.290 12:00:34 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:09.290 12:00:34 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:09.290 12:00:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:09.290 ************************************ 00:37:09.290 START TEST fio_dif_rand_params 00:37:09.290 ************************************ 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:09.290 bdev_null0 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:09.290 [2024-11-15 12:00:34.714824] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:09.290 
12:00:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:09.290 { 00:37:09.290 "params": { 00:37:09.290 "name": "Nvme$subsystem", 00:37:09.290 "trtype": "$TEST_TRANSPORT", 00:37:09.290 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:09.290 "adrfam": "ipv4", 00:37:09.290 "trsvcid": "$NVMF_PORT", 00:37:09.290 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:09.290 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:09.290 "hdgst": ${hdgst:-false}, 00:37:09.290 "ddgst": ${ddgst:-false} 00:37:09.290 }, 00:37:09.290 "method": "bdev_nvme_attach_controller" 00:37:09.290 } 00:37:09.290 EOF 00:37:09.290 )") 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:37:09.290 12:00:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:09.291 12:00:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:09.291 12:00:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:37:09.291 12:00:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:37:09.291 12:00:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:37:09.291 12:00:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:09.291 12:00:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:09.291 12:00:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:09.291 12:00:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:37:09.291 12:00:34 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:09.291 12:00:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:37:09.291 12:00:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
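The trace above is gen_nvmf_target_json at work: each subsystem contributes one bdev_nvme_attach_controller stanza via a here-document appended to the config array, IFS=, joins the fragments on commas, and jq validates and pretty-prints the result before it is handed to fio on /dev/fd/62. A minimal standalone sketch of that pattern follows; the outer "subsystems"/"config" wrapper is assumed from SPDK's standard JSON config shape, and the three-subsystem loop bound is illustrative, not the exact helper:

#!/usr/bin/env bash
# Build one bdev_nvme_attach_controller stanza per subsystem, comma-join
# them via IFS, and let jq validate/pretty-print the final config.
config=()
for sub in 0 1 2; do
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$sub",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$sub",
    "hostnqn": "nqn.2016-06.io.spdk:host$sub",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
IFS=,  # "${config[*]}" below joins the stanzas with the first char of IFS
jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
EOF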
00:37:09.291 12:00:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:09.291 12:00:34 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:09.291 "params": { 00:37:09.291 "name": "Nvme0", 00:37:09.291 "trtype": "tcp", 00:37:09.291 "traddr": "10.0.0.2", 00:37:09.291 "adrfam": "ipv4", 00:37:09.291 "trsvcid": "4420", 00:37:09.291 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:09.291 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:09.291 "hdgst": false, 00:37:09.291 "ddgst": false 00:37:09.291 }, 00:37:09.291 "method": "bdev_nvme_attach_controller" 00:37:09.291 }' 00:37:09.291 12:00:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:37:09.291 12:00:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:37:09.291 12:00:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:37:09.291 12:00:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:09.291 12:00:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:37:09.291 12:00:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:37:09.587 12:00:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:37:09.587 12:00:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:37:09.587 12:00:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:09.587 12:00:34 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:09.853 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:09.853 ... 
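At the tail of the trace, fio_plugin resolves any sanitizer runtimes (ldd over the plugin, grep for libasan/libclang_rt.asan, awk '{print $3}' for the library path) so they can be LD_PRELOADed ahead of the SPDK bdev engine, then launches stock fio with the external ioengine; /dev/fd/62 carries the JSON config and /dev/fd/61 the generated job file. Condensed into a sketch, with the paths copied from the trace (asan_lib stays empty on a non-sanitizer build, as here):

# Preload the ASan runtime (if any) plus the SPDK fio plugin, then run
# stock fio with the out-of-tree spdk_bdev ioengine.
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61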
00:37:09.853 fio-3.35 00:37:09.853 Starting 3 threads 00:37:16.438 00:37:16.438 filename0: (groupid=0, jobs=1): err= 0: pid=1382526: Fri Nov 15 12:00:40 2024 00:37:16.438 read: IOPS=315, BW=39.5MiB/s (41.4MB/s)(199MiB/5044msec) 00:37:16.438 slat (nsec): min=5563, max=43977, avg=8535.51, stdev=2059.96 00:37:16.438 clat (usec): min=4665, max=48566, avg=9462.42, stdev=3332.65 00:37:16.438 lat (usec): min=4672, max=48575, avg=9470.95, stdev=3332.64 00:37:16.438 clat percentiles (usec): 00:37:16.438 | 1.00th=[ 5800], 5.00th=[ 7439], 10.00th=[ 7898], 20.00th=[ 8586], 00:37:16.438 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9503], 00:37:16.438 | 70.00th=[ 9765], 80.00th=[ 9896], 90.00th=[10159], 95.00th=[10421], 00:37:16.438 | 99.00th=[11469], 99.50th=[47449], 99.90th=[48497], 99.95th=[48497], 00:37:16.438 | 99.99th=[48497] 00:37:16.438 bw ( KiB/s): min=37632, max=43776, per=34.29%, avg=40729.60, stdev=1771.36, samples=10 00:37:16.438 iops : min= 294, max= 342, avg=318.20, stdev=13.84, samples=10 00:37:16.438 lat (msec) : 10=82.74%, 20=16.57%, 50=0.69% 00:37:16.438 cpu : usr=95.14%, sys=4.62%, ctx=9, majf=0, minf=59 00:37:16.438 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:16.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.438 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.438 issued rwts: total=1593,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:16.438 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:16.438 filename0: (groupid=0, jobs=1): err= 0: pid=1382527: Fri Nov 15 12:00:40 2024 00:37:16.438 read: IOPS=312, BW=39.0MiB/s (40.9MB/s)(195MiB/5005msec) 00:37:16.438 slat (nsec): min=8110, max=43249, avg=9096.77, stdev=1750.12 00:37:16.438 clat (usec): min=5342, max=89231, avg=9599.90, stdev=2988.20 00:37:16.438 lat (usec): min=5351, max=89240, avg=9609.00, stdev=2988.27 00:37:16.438 clat percentiles (usec): 00:37:16.438 | 1.00th=[ 6456], 5.00th=[ 7570], 10.00th=[ 7963], 20.00th=[ 8717], 00:37:16.438 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[ 9765], 00:37:16.438 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10552], 95.00th=[10945], 00:37:16.438 | 99.00th=[11600], 99.50th=[14222], 99.90th=[49021], 99.95th=[89654], 00:37:16.438 | 99.99th=[89654] 00:37:16.438 bw ( KiB/s): min=36352, max=42240, per=33.63%, avg=39936.00, stdev=1521.71, samples=10 00:37:16.438 iops : min= 284, max= 330, avg=312.00, stdev=11.89, samples=10 00:37:16.438 lat (msec) : 10=72.54%, 20=27.14%, 50=0.26%, 100=0.06% 00:37:16.438 cpu : usr=94.16%, sys=5.56%, ctx=7, majf=0, minf=124 00:37:16.438 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:16.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.438 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.438 issued rwts: total=1562,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:16.438 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:16.438 filename0: (groupid=0, jobs=1): err= 0: pid=1382528: Fri Nov 15 12:00:40 2024 00:37:16.438 read: IOPS=302, BW=37.8MiB/s (39.6MB/s)(191MiB/5045msec) 00:37:16.438 slat (nsec): min=5610, max=31850, avg=8351.89, stdev=2035.07 00:37:16.438 clat (usec): min=6854, max=48436, avg=9880.37, stdev=1621.22 00:37:16.438 lat (usec): min=6862, max=48442, avg=9888.72, stdev=1621.19 00:37:16.438 clat percentiles (usec): 00:37:16.438 | 1.00th=[ 7373], 5.00th=[ 8094], 10.00th=[ 8717], 20.00th=[ 9241], 00:37:16.438 
| 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10028], 00:37:16.438 | 70.00th=[10290], 80.00th=[10552], 90.00th=[10814], 95.00th=[11207], 00:37:16.438 | 99.00th=[11731], 99.50th=[12387], 99.90th=[46400], 99.95th=[48497], 00:37:16.438 | 99.99th=[48497] 00:37:16.438 bw ( KiB/s): min=37888, max=40192, per=32.85%, avg=39014.40, stdev=802.31, samples=10 00:37:16.438 iops : min= 296, max= 314, avg=304.80, stdev= 6.27, samples=10 00:37:16.438 lat (msec) : 10=56.62%, 20=43.25%, 50=0.13% 00:37:16.438 cpu : usr=94.39%, sys=5.33%, ctx=7, majf=0, minf=100 00:37:16.438 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:16.438 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.438 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:16.438 issued rwts: total=1526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:16.438 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:16.438 00:37:16.438 Run status group 0 (all jobs): 00:37:16.438 READ: bw=116MiB/s (122MB/s), 37.8MiB/s-39.5MiB/s (39.6MB/s-41.4MB/s), io=585MiB (614MB), run=5005-5045msec 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 
--md-size 16 --dif-type 2 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:16.439 bdev_null0 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:16.439 [2024-11-15 12:00:40.983372] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:16.439 bdev_null1 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.439 12:00:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
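Each create_subsystem pass above is four rpc_cmd calls: create a 64 MiB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 2, wrap it in an NVMe-oF subsystem, attach it as a namespace, and expose a TCP listener. rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py, so the same sequence can be issued directly against a running nvmf_tgt; a sketch for subsystem 1, assuming the tcp transport was already created earlier in the run:

# One subsystem's worth of setup, mirroring the rpc_cmd trace above.
./scripts/rpc.py bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    --serial-number 53313233-1 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420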
00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:16.439 bdev_null2 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf 
/dev/fd/62 /dev/fd/61 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:16.439 { 00:37:16.439 "params": { 00:37:16.439 "name": "Nvme$subsystem", 00:37:16.439 "trtype": "$TEST_TRANSPORT", 00:37:16.439 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:16.439 "adrfam": "ipv4", 00:37:16.439 "trsvcid": "$NVMF_PORT", 00:37:16.439 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:16.439 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:16.439 "hdgst": ${hdgst:-false}, 00:37:16.439 "ddgst": ${ddgst:-false} 00:37:16.439 }, 00:37:16.439 "method": "bdev_nvme_attach_controller" 00:37:16.439 } 00:37:16.439 EOF 00:37:16.439 )") 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:16.439 12:00:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:16.439 { 00:37:16.439 "params": { 00:37:16.439 "name": "Nvme$subsystem", 00:37:16.439 "trtype": "$TEST_TRANSPORT", 00:37:16.439 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:16.439 "adrfam": "ipv4", 00:37:16.440 "trsvcid": "$NVMF_PORT", 00:37:16.440 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:16.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:16.440 "hdgst": ${hdgst:-false}, 00:37:16.440 "ddgst": ${ddgst:-false} 00:37:16.440 }, 00:37:16.440 "method": "bdev_nvme_attach_controller" 00:37:16.440 } 00:37:16.440 EOF 00:37:16.440 )") 00:37:16.440 12:00:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:16.440 12:00:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:16.440 12:00:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 
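Interleaved with the JSON assembly, gen_fio_conf is emitting the job file for /dev/fd/61: the first [filename0] stanza comes from the initial "# cat", and the (( file = 1 )) / (( file <= files )) / (( file++ )) arithmetic appends one stanza per extra file. With files=2 that yields filename0 through filename2, and with numjobs=8 per stanza that is the 24 threads started below. The shape of that loop, sketched; the stanza body and the filename= bdev names (Nvme0n1 etc.) are assumptions, since the trace does not show them:

# Sketch of the per-file stanza loop from the trace; emits fio sections
# [filename0] .. [filenameN] for N = files extra subsystems.
emit_stanza() {
cat <<EOF
[filename$1]
rw=randread
filename=Nvme${1}n1

EOF
}
files=2
emit_stanza 0                                 # the "# cat" before the loop
for (( file = 1; file <= files; file++ )); do
  emit_stanza "$file"                         # the "# cat" inside the loop
done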
00:37:16.440 12:00:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:16.440 12:00:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:16.440 12:00:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:16.440 12:00:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:16.440 12:00:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:16.440 { 00:37:16.440 "params": { 00:37:16.440 "name": "Nvme$subsystem", 00:37:16.440 "trtype": "$TEST_TRANSPORT", 00:37:16.440 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:16.440 "adrfam": "ipv4", 00:37:16.440 "trsvcid": "$NVMF_PORT", 00:37:16.440 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:16.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:16.440 "hdgst": ${hdgst:-false}, 00:37:16.440 "ddgst": ${ddgst:-false} 00:37:16.440 }, 00:37:16.440 "method": "bdev_nvme_attach_controller" 00:37:16.440 } 00:37:16.440 EOF 00:37:16.440 )") 00:37:16.440 12:00:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:16.440 12:00:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:37:16.440 12:00:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:16.440 12:00:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:16.440 "params": { 00:37:16.440 "name": "Nvme0", 00:37:16.440 "trtype": "tcp", 00:37:16.440 "traddr": "10.0.0.2", 00:37:16.440 "adrfam": "ipv4", 00:37:16.440 "trsvcid": "4420", 00:37:16.440 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:16.440 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:16.440 "hdgst": false, 00:37:16.440 "ddgst": false 00:37:16.440 }, 00:37:16.440 "method": "bdev_nvme_attach_controller" 00:37:16.440 },{ 00:37:16.440 "params": { 00:37:16.440 "name": "Nvme1", 00:37:16.440 "trtype": "tcp", 00:37:16.440 "traddr": "10.0.0.2", 00:37:16.440 "adrfam": "ipv4", 00:37:16.440 "trsvcid": "4420", 00:37:16.440 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:16.440 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:16.440 "hdgst": false, 00:37:16.440 "ddgst": false 00:37:16.440 }, 00:37:16.440 "method": "bdev_nvme_attach_controller" 00:37:16.440 },{ 00:37:16.440 "params": { 00:37:16.440 "name": "Nvme2", 00:37:16.440 "trtype": "tcp", 00:37:16.440 "traddr": "10.0.0.2", 00:37:16.440 "adrfam": "ipv4", 00:37:16.440 "trsvcid": "4420", 00:37:16.440 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:37:16.440 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:37:16.440 "hdgst": false, 00:37:16.440 "ddgst": false 00:37:16.440 }, 00:37:16.440 "method": "bdev_nvme_attach_controller" 00:37:16.440 }' 00:37:16.440 12:00:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:37:16.440 12:00:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:37:16.440 12:00:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:37:16.440 12:00:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:37:16.440 12:00:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:16.440 12:00:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:37:16.440 12:00:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:37:16.440 12:00:41 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:37:16.440 12:00:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:16.440 12:00:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:16.440 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:16.440 ... 00:37:16.440 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:16.440 ... 00:37:16.440 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:37:16.440 ... 00:37:16.440 fio-3.35 00:37:16.440 Starting 24 threads 00:37:28.683 00:37:28.683 filename0: (groupid=0, jobs=1): err= 0: pid=1384031: Fri Nov 15 12:00:52 2024 00:37:28.683 read: IOPS=430, BW=1722KiB/s (1763kB/s)(16.8MiB/10004msec) 00:37:28.683 slat (nsec): min=5507, max=93863, avg=9082.08, stdev=6781.30 00:37:28.683 clat (usec): min=1148, max=564820, avg=37091.35, stdev=70171.12 00:37:28.683 lat (usec): min=1181, max=564826, avg=37100.43, stdev=70170.73 00:37:28.684 clat percentiles (usec): 00:37:28.684 | 1.00th=[ 1500], 5.00th=[ 9896], 10.00th=[ 16581], 20.00th=[ 22938], 00:37:28.684 | 30.00th=[ 23200], 40.00th=[ 23462], 50.00th=[ 23725], 60.00th=[ 23987], 00:37:28.684 | 70.00th=[ 24249], 80.00th=[ 24511], 90.00th=[ 25035], 95.00th=[ 42730], 00:37:28.684 | 99.00th=[387974], 99.50th=[404751], 99.90th=[534774], 99.95th=[566232], 00:37:28.684 | 99.99th=[566232] 00:37:28.684 bw ( KiB/s): min= 128, max= 4248, per=4.28%, avg=1671.95, stdev=1380.26, samples=19 00:37:28.684 iops : min= 32, max= 1062, avg=417.95, stdev=345.11, samples=19 00:37:28.684 lat (msec) : 2=2.55%, 4=0.79%, 10=1.69%, 20=7.48%, 50=82.61% 00:37:28.684 lat (msec) : 100=0.46%, 250=0.28%, 500=3.95%, 750=0.19% 00:37:28.684 cpu : usr=99.04%, sys=0.63%, ctx=16, majf=0, minf=33 00:37:28.684 IO depths : 1=4.9%, 2=9.8%, 4=20.7%, 8=56.7%, 16=7.8%, 32=0.0%, >=64=0.0% 00:37:28.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.684 complete : 0=0.0%, 4=92.9%, 8=1.5%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.684 issued rwts: total=4307,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:28.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:28.684 filename0: (groupid=0, jobs=1): err= 0: pid=1384032: Fri Nov 15 12:00:52 2024 00:37:28.684 read: IOPS=408, BW=1633KiB/s (1672kB/s)(16.0MiB/10010msec) 00:37:28.684 slat (usec): min=5, max=142, avg=18.43, stdev=15.83 00:37:28.684 clat (msec): min=6, max=486, avg=39.05, stdev=69.92 00:37:28.684 lat (msec): min=6, max=486, avg=39.07, stdev=69.92 00:37:28.684 clat percentiles (msec): 00:37:28.684 | 1.00th=[ 9], 5.00th=[ 21], 10.00th=[ 23], 20.00th=[ 24], 00:37:28.684 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:37:28.684 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 26], 95.00th=[ 192], 00:37:28.684 | 99.00th=[ 393], 99.50th=[ 397], 99.90th=[ 409], 99.95th=[ 409], 00:37:28.684 | 99.99th=[ 485] 00:37:28.684 bw ( KiB/s): min= 128, max= 3248, per=4.05%, avg=1578.89, stdev=1265.80, samples=19 00:37:28.684 iops : min= 32, max= 812, avg=394.68, stdev=316.50, samples=19 00:37:28.684 lat (msec) : 10=1.59%, 20=3.38%, 50=89.94%, 100=0.05%, 250=0.73% 00:37:28.684 lat (msec) : 500=4.31% 00:37:28.684 cpu : usr=99.02%, sys=0.67%, ctx=14, 
majf=0, minf=35 00:37:28.684 IO depths : 1=5.6%, 2=11.6%, 4=24.3%, 8=51.5%, 16=7.0%, 32=0.0%, >=64=0.0% 00:37:28.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.684 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.684 issued rwts: total=4086,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:28.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:28.684 filename0: (groupid=0, jobs=1): err= 0: pid=1384033: Fri Nov 15 12:00:52 2024 00:37:28.684 read: IOPS=400, BW=1602KiB/s (1640kB/s)(15.6MiB/10004msec) 00:37:28.684 slat (usec): min=5, max=138, avg=18.13, stdev=15.07 00:37:28.684 clat (msec): min=11, max=605, avg=39.80, stdev=73.89 00:37:28.684 lat (msec): min=12, max=605, avg=39.82, stdev=73.88 00:37:28.684 clat percentiles (msec): 00:37:28.684 | 1.00th=[ 17], 5.00th=[ 23], 10.00th=[ 23], 20.00th=[ 24], 00:37:28.684 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:37:28.684 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 26], 95.00th=[ 41], 00:37:28.684 | 99.00th=[ 401], 99.50th=[ 460], 99.90th=[ 609], 99.95th=[ 609], 00:37:28.684 | 99.99th=[ 609] 00:37:28.684 bw ( KiB/s): min= 80, max= 2816, per=3.96%, avg=1544.95, stdev=1237.20, samples=19 00:37:28.684 iops : min= 20, max= 704, avg=386.21, stdev=309.28, samples=19 00:37:28.684 lat (msec) : 20=2.20%, 50=92.86%, 100=0.15%, 250=0.40%, 500=4.04% 00:37:28.684 lat (msec) : 750=0.35% 00:37:28.684 cpu : usr=99.17%, sys=0.51%, ctx=14, majf=0, minf=39 00:37:28.684 IO depths : 1=5.1%, 2=10.5%, 4=22.8%, 8=53.9%, 16=7.7%, 32=0.0%, >=64=0.0% 00:37:28.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.684 complete : 0=0.0%, 4=93.7%, 8=0.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.684 issued rwts: total=4006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:28.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:28.684 filename0: (groupid=0, jobs=1): err= 0: pid=1384034: Fri Nov 15 12:00:52 2024 00:37:28.684 read: IOPS=395, BW=1580KiB/s (1618kB/s)(15.4MiB/10004msec) 00:37:28.684 slat (usec): min=5, max=114, avg=15.90, stdev=13.72 00:37:28.684 clat (msec): min=7, max=575, avg=40.40, stdev=89.01 00:37:28.684 lat (msec): min=7, max=575, avg=40.42, stdev=89.01 00:37:28.684 clat percentiles (msec): 00:37:28.684 | 1.00th=[ 11], 5.00th=[ 17], 10.00th=[ 20], 20.00th=[ 23], 00:37:28.684 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:37:28.684 | 70.00th=[ 25], 80.00th=[ 26], 90.00th=[ 31], 95.00th=[ 39], 00:37:28.684 | 99.00th=[ 550], 99.50th=[ 567], 99.90th=[ 575], 99.95th=[ 575], 00:37:28.684 | 99.99th=[ 575] 00:37:28.684 bw ( KiB/s): min= 24, max= 2880, per=3.89%, avg=1516.05, stdev=1257.48, samples=19 00:37:28.684 iops : min= 6, max= 720, avg=379.00, stdev=314.36, samples=19 00:37:28.684 lat (msec) : 10=0.81%, 20=11.51%, 50=84.11%, 250=0.33%, 500=0.40% 00:37:28.684 lat (msec) : 750=2.83% 00:37:28.684 cpu : usr=98.92%, sys=0.75%, ctx=17, majf=0, minf=26 00:37:28.684 IO depths : 1=1.2%, 2=2.4%, 4=8.4%, 8=74.5%, 16=13.6%, 32=0.0%, >=64=0.0% 00:37:28.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.684 complete : 0=0.0%, 4=90.3%, 8=6.3%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.684 issued rwts: total=3952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:28.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:28.684 filename0: (groupid=0, jobs=1): err= 0: pid=1384035: Fri Nov 15 12:00:52 2024 00:37:28.684 read: IOPS=398, BW=1596KiB/s (1634kB/s)(15.6MiB/10001msec) 
00:37:28.684 slat (usec): min=5, max=131, avg=24.75, stdev=18.90 00:37:28.684 clat (msec): min=9, max=696, avg=39.90, stdev=88.60 00:37:28.684 lat (msec): min=9, max=696, avg=39.93, stdev=88.60 00:37:28.684 clat percentiles (msec): 00:37:28.684 | 1.00th=[ 15], 5.00th=[ 17], 10.00th=[ 22], 20.00th=[ 23], 00:37:28.684 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:37:28.684 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 27], 95.00th=[ 33], 00:37:28.684 | 99.00th=[ 550], 99.50th=[ 575], 99.90th=[ 693], 99.95th=[ 701], 00:37:28.684 | 99.99th=[ 701] 00:37:28.684 bw ( KiB/s): min= 112, max= 3024, per=4.13%, avg=1611.83, stdev=1250.48, samples=18 00:37:28.684 iops : min= 28, max= 756, avg=402.94, stdev=312.61, samples=18 00:37:28.684 lat (msec) : 10=0.40%, 20=7.72%, 50=88.27%, 250=0.45%, 500=0.50% 00:37:28.684 lat (msec) : 750=2.66% 00:37:28.684 cpu : usr=99.03%, sys=0.65%, ctx=14, majf=0, minf=43 00:37:28.684 IO depths : 1=3.9%, 2=8.0%, 4=18.5%, 8=60.1%, 16=9.6%, 32=0.0%, >=64=0.0% 00:37:28.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.684 complete : 0=0.0%, 4=92.7%, 8=2.4%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.684 issued rwts: total=3990,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:28.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:28.684 filename0: (groupid=0, jobs=1): err= 0: pid=1384036: Fri Nov 15 12:00:52 2024 00:37:28.684 read: IOPS=405, BW=1622KiB/s (1661kB/s)(15.9MiB/10006msec) 00:37:28.684 slat (usec): min=5, max=126, avg=19.19, stdev=16.41 00:37:28.684 clat (msec): min=9, max=605, avg=39.31, stdev=73.73 00:37:28.684 lat (msec): min=9, max=605, avg=39.32, stdev=73.73 00:37:28.684 clat percentiles (msec): 00:37:28.684 | 1.00th=[ 15], 5.00th=[ 18], 10.00th=[ 23], 20.00th=[ 23], 00:37:28.684 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:37:28.684 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 26], 95.00th=[ 44], 00:37:28.684 | 99.00th=[ 409], 99.50th=[ 456], 99.90th=[ 584], 99.95th=[ 609], 00:37:28.684 | 99.99th=[ 609] 00:37:28.684 bw ( KiB/s): min= 80, max= 2864, per=3.98%, avg=1551.16, stdev=1234.50, samples=19 00:37:28.684 iops : min= 20, max= 716, avg=387.79, stdev=308.62, samples=19 00:37:28.684 lat (msec) : 10=0.39%, 20=6.78%, 50=87.95%, 100=0.10%, 250=0.69% 00:37:28.684 lat (msec) : 500=3.65%, 750=0.44% 00:37:28.684 cpu : usr=99.08%, sys=0.59%, ctx=14, majf=0, minf=24 00:37:28.684 IO depths : 1=3.4%, 2=7.7%, 4=19.3%, 8=60.1%, 16=9.5%, 32=0.0%, >=64=0.0% 00:37:28.684 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.684 complete : 0=0.0%, 4=92.8%, 8=2.0%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.684 issued rwts: total=4058,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:28.684 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:28.684 filename0: (groupid=0, jobs=1): err= 0: pid=1384037: Fri Nov 15 12:00:52 2024 00:37:28.684 read: IOPS=416, BW=1665KiB/s (1705kB/s)(16.3MiB/10010msec) 00:37:28.684 slat (nsec): min=5505, max=85551, avg=13441.24, stdev=11455.98 00:37:28.684 clat (msec): min=6, max=609, avg=38.34, stdev=72.19 00:37:28.684 lat (msec): min=6, max=609, avg=38.35, stdev=72.19 00:37:28.684 clat percentiles (msec): 00:37:28.684 | 1.00th=[ 9], 5.00th=[ 15], 10.00th=[ 19], 20.00th=[ 23], 00:37:28.684 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:37:28.684 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 26], 95.00th=[ 39], 00:37:28.684 | 99.00th=[ 401], 99.50th=[ 456], 99.90th=[ 609], 99.95th=[ 609], 00:37:28.684 | 99.99th=[ 609] 
00:37:28.684 bw ( KiB/s): min= 80, max= 3120, per=4.13%, avg=1612.58, stdev=1278.42, samples=19 00:37:28.684 iops : min= 20, max= 780, avg=403.11, stdev=319.65, samples=19 00:37:28.684 lat (msec) : 10=2.21%, 20=9.31%, 50=83.73%, 100=0.14%, 250=0.29% 00:37:28.685 lat (msec) : 500=4.08%, 750=0.24% 00:37:28.685 cpu : usr=99.16%, sys=0.51%, ctx=15, majf=0, minf=43 00:37:28.685 IO depths : 1=4.6%, 2=9.2%, 4=20.1%, 8=57.8%, 16=8.2%, 32=0.0%, >=64=0.0% 00:37:28.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.685 complete : 0=0.0%, 4=92.9%, 8=1.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.685 issued rwts: total=4166,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:28.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:28.685 filename0: (groupid=0, jobs=1): err= 0: pid=1384038: Fri Nov 15 12:00:52 2024 00:37:28.685 read: IOPS=422, BW=1688KiB/s (1729kB/s)(16.5MiB/10009msec) 00:37:28.685 slat (usec): min=5, max=116, avg=14.90, stdev=14.66 00:37:28.685 clat (msec): min=10, max=550, avg=37.80, stdev=71.01 00:37:28.685 lat (msec): min=10, max=550, avg=37.82, stdev=71.00 00:37:28.685 clat percentiles (msec): 00:37:28.685 | 1.00th=[ 14], 5.00th=[ 16], 10.00th=[ 17], 20.00th=[ 21], 00:37:28.685 | 30.00th=[ 23], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:37:28.685 | 70.00th=[ 24], 80.00th=[ 25], 90.00th=[ 27], 95.00th=[ 39], 00:37:28.685 | 99.00th=[ 393], 99.50th=[ 405], 99.90th=[ 510], 99.95th=[ 550], 00:37:28.685 | 99.99th=[ 550] 00:37:28.685 bw ( KiB/s): min= 128, max= 3024, per=4.20%, avg=1637.05, stdev=1306.46, samples=19 00:37:28.685 iops : min= 32, max= 756, avg=409.26, stdev=326.62, samples=19 00:37:28.685 lat (msec) : 20=19.84%, 50=75.33%, 100=0.24%, 250=0.52%, 500=3.88% 00:37:28.685 lat (msec) : 750=0.19% 00:37:28.685 cpu : usr=99.10%, sys=0.59%, ctx=21, majf=0, minf=28 00:37:28.685 IO depths : 1=2.6%, 2=5.2%, 4=13.2%, 8=68.4%, 16=10.7%, 32=0.0%, >=64=0.0% 00:37:28.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.685 complete : 0=0.0%, 4=90.9%, 8=4.2%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.685 issued rwts: total=4224,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:28.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:28.685 filename1: (groupid=0, jobs=1): err= 0: pid=1384039: Fri Nov 15 12:00:52 2024 00:37:28.685 read: IOPS=415, BW=1662KiB/s (1702kB/s)(16.2MiB/10010msec) 00:37:28.685 slat (usec): min=5, max=139, avg=14.06, stdev=13.23 00:37:28.685 clat (msec): min=8, max=528, avg=38.39, stdev=72.41 00:37:28.685 lat (msec): min=8, max=528, avg=38.40, stdev=72.41 00:37:28.685 clat percentiles (msec): 00:37:28.685 | 1.00th=[ 11], 5.00th=[ 16], 10.00th=[ 18], 20.00th=[ 23], 00:37:28.685 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:37:28.685 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 26], 95.00th=[ 34], 00:37:28.685 | 99.00th=[ 397], 99.50th=[ 409], 99.90th=[ 531], 99.95th=[ 531], 00:37:28.685 | 99.99th=[ 531] 00:37:28.685 bw ( KiB/s): min= 128, max= 3168, per=4.13%, avg=1610.05, stdev=1298.90, samples=19 00:37:28.685 iops : min= 32, max= 792, avg=402.47, stdev=324.77, samples=19 00:37:28.685 lat (msec) : 10=0.67%, 20=11.18%, 50=83.53%, 250=0.38%, 500=3.99% 00:37:28.685 lat (msec) : 750=0.24% 00:37:28.685 cpu : usr=99.01%, sys=0.66%, ctx=19, majf=0, minf=36 00:37:28.685 IO depths : 1=4.9%, 2=10.0%, 4=21.4%, 8=56.0%, 16=7.7%, 32=0.0%, >=64=0.0% 00:37:28.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.685 complete : 0=0.0%, 4=93.1%, 
8=1.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.685 issued rwts: total=4160,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:28.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:28.685 filename1: (groupid=0, jobs=1): err= 0: pid=1384040: Fri Nov 15 12:00:52 2024 00:37:28.685 read: IOPS=404, BW=1618KiB/s (1657kB/s)(15.8MiB/10013msec) 00:37:28.685 slat (usec): min=5, max=104, avg=15.07, stdev=12.98 00:37:28.685 clat (msec): min=11, max=600, avg=39.44, stdev=79.66 00:37:28.685 lat (msec): min=11, max=600, avg=39.46, stdev=79.65 00:37:28.685 clat percentiles (msec): 00:37:28.685 | 1.00th=[ 13], 5.00th=[ 17], 10.00th=[ 19], 20.00th=[ 23], 00:37:28.685 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:37:28.685 | 70.00th=[ 25], 80.00th=[ 26], 90.00th=[ 30], 95.00th=[ 35], 00:37:28.685 | 99.00th=[ 527], 99.50th=[ 550], 99.90th=[ 600], 99.95th=[ 600], 00:37:28.685 | 99.99th=[ 600] 00:37:28.685 bw ( KiB/s): min= 80, max= 2928, per=4.36%, avg=1701.32, stdev=1240.95, samples=19 00:37:28.685 iops : min= 20, max= 732, avg=425.32, stdev=310.23, samples=19 00:37:28.685 lat (msec) : 20=14.32%, 50=81.43%, 250=0.40%, 500=2.62%, 750=1.23% 00:37:28.685 cpu : usr=98.95%, sys=0.74%, ctx=16, majf=0, minf=30 00:37:28.685 IO depths : 1=1.4%, 2=3.4%, 4=9.8%, 8=72.3%, 16=13.1%, 32=0.0%, >=64=0.0% 00:37:28.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.685 complete : 0=0.0%, 4=90.3%, 8=5.8%, 16=3.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.685 issued rwts: total=4050,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:28.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:28.685 filename1: (groupid=0, jobs=1): err= 0: pid=1384041: Fri Nov 15 12:00:52 2024 00:37:28.685 read: IOPS=404, BW=1620KiB/s (1658kB/s)(15.8MiB/10008msec) 00:37:28.685 slat (usec): min=5, max=101, avg=22.95, stdev=17.84 00:37:28.685 clat (msec): min=8, max=583, avg=39.32, stdev=73.62 00:37:28.685 lat (msec): min=8, max=583, avg=39.35, stdev=73.62 00:37:28.685 clat percentiles (msec): 00:37:28.685 | 1.00th=[ 14], 5.00th=[ 18], 10.00th=[ 23], 20.00th=[ 24], 00:37:28.685 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:37:28.685 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 26], 95.00th=[ 37], 00:37:28.685 | 99.00th=[ 397], 99.50th=[ 418], 99.90th=[ 584], 99.95th=[ 584], 00:37:28.685 | 99.99th=[ 584] 00:37:28.685 bw ( KiB/s): min= 128, max= 2736, per=3.98%, avg=1552.84, stdev=1232.94, samples=19 00:37:28.685 iops : min= 32, max= 684, avg=388.21, stdev=308.24, samples=19 00:37:28.685 lat (msec) : 10=0.15%, 20=6.19%, 50=88.92%, 250=0.39%, 500=4.00% 00:37:28.685 lat (msec) : 750=0.35% 00:37:28.685 cpu : usr=99.03%, sys=0.66%, ctx=38, majf=0, minf=47 00:37:28.685 IO depths : 1=5.2%, 2=10.6%, 4=22.2%, 8=54.6%, 16=7.4%, 32=0.0%, >=64=0.0% 00:37:28.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.685 complete : 0=0.0%, 4=93.3%, 8=1.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.685 issued rwts: total=4052,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:28.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:28.685 filename1: (groupid=0, jobs=1): err= 0: pid=1384042: Fri Nov 15 12:00:52 2024 00:37:28.685 read: IOPS=415, BW=1663KiB/s (1703kB/s)(16.3MiB/10010msec) 00:37:28.685 slat (usec): min=5, max=105, avg=13.97, stdev=12.29 00:37:28.685 clat (msec): min=6, max=431, avg=38.37, stdev=69.98 00:37:28.685 lat (msec): min=6, max=431, avg=38.39, stdev=69.98 00:37:28.685 clat percentiles (msec): 00:37:28.685 | 1.00th=[ 8], 
5.00th=[ 15], 10.00th=[ 19], 20.00th=[ 23], 00:37:28.685 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:37:28.685 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 26], 95.00th=[ 46], 00:37:28.685 | 99.00th=[ 397], 99.50th=[ 401], 99.90th=[ 409], 99.95th=[ 409], 00:37:28.685 | 99.99th=[ 430] 00:37:28.685 bw ( KiB/s): min= 128, max= 3200, per=4.13%, avg=1610.89, stdev=1287.24, samples=19 00:37:28.685 iops : min= 32, max= 800, avg=402.68, stdev=321.85, samples=19 00:37:28.685 lat (msec) : 10=1.63%, 20=9.97%, 50=83.40%, 100=0.38%, 500=4.61% 00:37:28.685 cpu : usr=99.06%, sys=0.65%, ctx=13, majf=0, minf=52 00:37:28.685 IO depths : 1=3.9%, 2=8.4%, 4=22.0%, 8=57.0%, 16=8.7%, 32=0.0%, >=64=0.0% 00:37:28.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.685 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.685 issued rwts: total=4162,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:28.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:28.685 filename1: (groupid=0, jobs=1): err= 0: pid=1384043: Fri Nov 15 12:00:52 2024 00:37:28.685 read: IOPS=393, BW=1575KiB/s (1612kB/s)(15.4MiB/10004msec) 00:37:28.685 slat (usec): min=5, max=165, avg=24.00, stdev=19.68 00:37:28.685 clat (msec): min=4, max=575, avg=40.46, stdev=88.92 00:37:28.685 lat (msec): min=4, max=575, avg=40.48, stdev=88.92 00:37:28.685 clat percentiles (msec): 00:37:28.685 | 1.00th=[ 14], 5.00th=[ 20], 10.00th=[ 23], 20.00th=[ 23], 00:37:28.685 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:37:28.685 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 27], 95.00th=[ 36], 00:37:28.685 | 99.00th=[ 550], 99.50th=[ 567], 99.90th=[ 575], 99.95th=[ 575], 00:37:28.685 | 99.99th=[ 575] 00:37:28.685 bw ( KiB/s): min= 24, max= 2704, per=3.86%, avg=1505.11, stdev=1245.85, samples=19 00:37:28.685 iops : min= 6, max= 676, avg=376.26, stdev=311.45, samples=19 00:37:28.685 lat (msec) : 10=0.25%, 20=5.18%, 50=90.99%, 250=0.33%, 500=0.41% 00:37:28.685 lat (msec) : 750=2.84% 00:37:28.685 cpu : usr=99.02%, sys=0.58%, ctx=41, majf=0, minf=40 00:37:28.685 IO depths : 1=1.0%, 2=4.6%, 4=16.4%, 8=64.8%, 16=13.3%, 32=0.0%, >=64=0.0% 00:37:28.685 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.685 complete : 0=0.0%, 4=92.4%, 8=3.6%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.685 issued rwts: total=3938,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:28.685 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:28.685 filename1: (groupid=0, jobs=1): err= 0: pid=1384044: Fri Nov 15 12:00:52 2024 00:37:28.685 read: IOPS=396, BW=1588KiB/s (1626kB/s)(15.5MiB/10003msec) 00:37:28.685 slat (usec): min=5, max=144, avg=25.29, stdev=18.28 00:37:28.685 clat (msec): min=4, max=695, avg=40.09, stdev=88.43 00:37:28.685 lat (msec): min=4, max=695, avg=40.11, stdev=88.43 00:37:28.685 clat percentiles (msec): 00:37:28.685 | 1.00th=[ 13], 5.00th=[ 21], 10.00th=[ 23], 20.00th=[ 24], 00:37:28.686 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:37:28.686 | 70.00th=[ 24], 80.00th=[ 25], 90.00th=[ 26], 95.00th=[ 30], 00:37:28.686 | 99.00th=[ 550], 99.50th=[ 575], 99.90th=[ 693], 99.95th=[ 693], 00:37:28.686 | 99.99th=[ 693] 00:37:28.686 bw ( KiB/s): min= 16, max= 2832, per=3.89%, avg=1517.47, stdev=1258.71, samples=19 00:37:28.686 iops : min= 4, max= 708, avg=379.37, stdev=314.68, samples=19 00:37:28.686 lat (msec) : 10=0.73%, 20=4.01%, 50=91.64%, 250=0.35%, 500=0.65% 00:37:28.686 lat (msec) : 750=2.62% 00:37:28.686 cpu : usr=98.99%, 
sys=0.71%, ctx=14, majf=0, minf=35 00:37:28.686 IO depths : 1=4.9%, 2=10.3%, 4=21.9%, 8=54.9%, 16=8.0%, 32=0.0%, >=64=0.0% 00:37:28.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.686 complete : 0=0.0%, 4=93.4%, 8=1.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.686 issued rwts: total=3970,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:28.686 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:28.686 filename1: (groupid=0, jobs=1): err= 0: pid=1384045: Fri Nov 15 12:00:52 2024 00:37:28.686 read: IOPS=415, BW=1661KiB/s (1701kB/s)(16.2MiB/10003msec) 00:37:28.686 slat (usec): min=5, max=158, avg=24.15, stdev=20.98 00:37:28.686 clat (msec): min=12, max=549, avg=38.31, stdev=70.74 00:37:28.686 lat (msec): min=12, max=549, avg=38.34, stdev=70.73 00:37:28.686 clat percentiles (msec): 00:37:28.686 | 1.00th=[ 15], 5.00th=[ 17], 10.00th=[ 18], 20.00th=[ 22], 00:37:28.686 | 30.00th=[ 23], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:37:28.686 | 70.00th=[ 24], 80.00th=[ 25], 90.00th=[ 27], 95.00th=[ 39], 00:37:28.686 | 99.00th=[ 397], 99.50th=[ 405], 99.90th=[ 531], 99.95th=[ 550], 00:37:28.686 | 99.99th=[ 550] 00:37:28.686 bw ( KiB/s): min= 96, max= 3024, per=4.08%, avg=1592.68, stdev=1278.59, samples=19 00:37:28.686 iops : min= 24, max= 756, avg=398.16, stdev=319.64, samples=19 00:37:28.686 lat (msec) : 20=15.29%, 50=79.80%, 250=0.39%, 500=4.33%, 750=0.19% 00:37:28.686 cpu : usr=98.80%, sys=0.79%, ctx=44, majf=0, minf=25 00:37:28.686 IO depths : 1=3.3%, 2=6.7%, 4=15.5%, 8=64.4%, 16=10.0%, 32=0.0%, >=64=0.0% 00:37:28.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.686 complete : 0=0.0%, 4=91.5%, 8=3.6%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.686 issued rwts: total=4154,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:28.686 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:28.686 filename1: (groupid=0, jobs=1): err= 0: pid=1384046: Fri Nov 15 12:00:52 2024 00:37:28.686 read: IOPS=405, BW=1623KiB/s (1661kB/s)(15.9MiB/10004msec) 00:37:28.686 slat (usec): min=5, max=101, avg=24.79, stdev=19.09 00:37:28.686 clat (msec): min=4, max=594, avg=39.23, stdev=71.59 00:37:28.686 lat (msec): min=4, max=594, avg=39.25, stdev=71.59 00:37:28.686 clat percentiles (msec): 00:37:28.686 | 1.00th=[ 13], 5.00th=[ 18], 10.00th=[ 21], 20.00th=[ 23], 00:37:28.686 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:37:28.686 | 70.00th=[ 24], 80.00th=[ 25], 90.00th=[ 28], 95.00th=[ 78], 00:37:28.686 | 99.00th=[ 397], 99.50th=[ 401], 99.90th=[ 518], 99.95th=[ 592], 00:37:28.686 | 99.99th=[ 592] 00:37:28.686 bw ( KiB/s): min= 128, max= 2768, per=3.99%, avg=1556.74, stdev=1237.61, samples=19 00:37:28.686 iops : min= 32, max= 692, avg=389.16, stdev=309.38, samples=19 00:37:28.686 lat (msec) : 10=0.30%, 20=8.58%, 50=86.10%, 100=0.17%, 250=0.54% 00:37:28.686 lat (msec) : 500=4.19%, 750=0.12% 00:37:28.686 cpu : usr=98.97%, sys=0.71%, ctx=26, majf=0, minf=43 00:37:28.686 IO depths : 1=3.4%, 2=7.0%, 4=16.0%, 8=63.8%, 16=9.8%, 32=0.0%, >=64=0.0% 00:37:28.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.686 complete : 0=0.0%, 4=91.7%, 8=3.3%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.686 issued rwts: total=4058,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:28.686 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:28.686 filename2: (groupid=0, jobs=1): err= 0: pid=1384047: Fri Nov 15 12:00:52 2024 00:37:28.686 read: IOPS=408, BW=1636KiB/s (1675kB/s)(16.0MiB/10005msec) 
00:37:28.686 slat (usec): min=5, max=139, avg=20.40, stdev=16.13 00:37:28.686 clat (msec): min=4, max=619, avg=38.96, stdev=73.32 00:37:28.686 lat (msec): min=4, max=619, avg=38.98, stdev=73.32 00:37:28.686 clat percentiles (msec): 00:37:28.686 | 1.00th=[ 14], 5.00th=[ 16], 10.00th=[ 19], 20.00th=[ 23], 00:37:28.686 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:37:28.686 | 70.00th=[ 24], 80.00th=[ 25], 90.00th=[ 28], 95.00th=[ 46], 00:37:28.686 | 99.00th=[ 401], 99.50th=[ 430], 99.90th=[ 550], 99.95th=[ 617], 00:37:28.686 | 99.99th=[ 617] 00:37:28.686 bw ( KiB/s): min= 80, max= 2896, per=4.01%, avg=1564.63, stdev=1256.84, samples=19 00:37:28.686 iops : min= 20, max= 724, avg=391.16, stdev=314.21, samples=19 00:37:28.686 lat (msec) : 10=0.49%, 20=11.49%, 50=83.33%, 100=0.10%, 250=0.15% 00:37:28.686 lat (msec) : 500=4.20%, 750=0.24% 00:37:28.686 cpu : usr=99.03%, sys=0.66%, ctx=16, majf=0, minf=26 00:37:28.686 IO depths : 1=3.2%, 2=6.4%, 4=14.9%, 8=65.2%, 16=10.4%, 32=0.0%, >=64=0.0% 00:37:28.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.686 complete : 0=0.0%, 4=91.5%, 8=3.8%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.686 issued rwts: total=4092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:28.686 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:28.686 filename2: (groupid=0, jobs=1): err= 0: pid=1384048: Fri Nov 15 12:00:52 2024 00:37:28.686 read: IOPS=408, BW=1636KiB/s (1675kB/s)(16.0MiB/10001msec) 00:37:28.686 slat (usec): min=5, max=136, avg=23.58, stdev=18.74 00:37:28.686 clat (msec): min=6, max=493, avg=38.92, stdev=69.95 00:37:28.686 lat (msec): min=6, max=493, avg=38.94, stdev=69.94 00:37:28.686 clat percentiles (msec): 00:37:28.686 | 1.00th=[ 9], 5.00th=[ 18], 10.00th=[ 23], 20.00th=[ 24], 00:37:28.686 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:37:28.686 | 70.00th=[ 24], 80.00th=[ 25], 90.00th=[ 26], 95.00th=[ 192], 00:37:28.686 | 99.00th=[ 393], 99.50th=[ 397], 99.90th=[ 409], 99.95th=[ 409], 00:37:28.686 | 99.99th=[ 493] 00:37:28.686 bw ( KiB/s): min= 128, max= 3200, per=4.05%, avg=1580.58, stdev=1265.91, samples=19 00:37:28.686 iops : min= 32, max= 800, avg=395.11, stdev=316.53, samples=19 00:37:28.686 lat (msec) : 10=1.91%, 20=4.08%, 50=88.92%, 100=0.05%, 250=0.73% 00:37:28.686 lat (msec) : 500=4.30% 00:37:28.686 cpu : usr=98.98%, sys=0.70%, ctx=29, majf=0, minf=32 00:37:28.686 IO depths : 1=5.2%, 2=10.8%, 4=22.8%, 8=53.8%, 16=7.3%, 32=0.0%, >=64=0.0% 00:37:28.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.686 complete : 0=0.0%, 4=93.5%, 8=0.7%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.686 issued rwts: total=4090,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:28.686 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:28.686 filename2: (groupid=0, jobs=1): err= 0: pid=1384049: Fri Nov 15 12:00:52 2024 00:37:28.686 read: IOPS=405, BW=1623KiB/s (1661kB/s)(15.9MiB/10024msec) 00:37:28.686 slat (usec): min=5, max=158, avg=17.27, stdev=17.02 00:37:28.686 clat (msec): min=11, max=460, avg=39.30, stdev=70.33 00:37:28.686 lat (msec): min=11, max=460, avg=39.31, stdev=70.33 00:37:28.686 clat percentiles (msec): 00:37:28.686 | 1.00th=[ 15], 5.00th=[ 22], 10.00th=[ 23], 20.00th=[ 24], 00:37:28.686 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:37:28.686 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 26], 95.00th=[ 75], 00:37:28.686 | 99.00th=[ 393], 99.50th=[ 409], 99.90th=[ 456], 99.95th=[ 456], 00:37:28.686 | 99.99th=[ 460] 
00:37:28.686 bw ( KiB/s): min= 128, max= 2816, per=4.15%, avg=1620.00, stdev=1228.64, samples=20 00:37:28.686 iops : min= 32, max= 704, avg=405.00, stdev=307.16, samples=20 00:37:28.686 lat (msec) : 20=4.40%, 50=90.33%, 100=0.54%, 500=4.72% 00:37:28.686 cpu : usr=98.90%, sys=0.70%, ctx=30, majf=0, minf=44 00:37:28.686 IO depths : 1=5.3%, 2=10.6%, 4=22.5%, 8=54.4%, 16=7.3%, 32=0.0%, >=64=0.0% 00:37:28.686 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.686 complete : 0=0.0%, 4=93.4%, 8=0.8%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.686 issued rwts: total=4066,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:28.686 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:28.686 filename2: (groupid=0, jobs=1): err= 0: pid=1384050: Fri Nov 15 12:00:52 2024 00:37:28.686 read: IOPS=401, BW=1606KiB/s (1644kB/s)(15.7MiB/10004msec) 00:37:28.686 slat (usec): min=5, max=145, avg=26.44, stdev=21.56 00:37:28.686 clat (msec): min=4, max=695, avg=39.63, stdev=88.12 00:37:28.686 lat (msec): min=4, max=695, avg=39.65, stdev=88.12 00:37:28.686 clat percentiles (msec): 00:37:28.686 | 1.00th=[ 13], 5.00th=[ 17], 10.00th=[ 21], 20.00th=[ 23], 00:37:28.686 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:37:28.686 | 70.00th=[ 24], 80.00th=[ 25], 90.00th=[ 26], 95.00th=[ 34], 00:37:28.686 | 99.00th=[ 550], 99.50th=[ 567], 99.90th=[ 667], 99.95th=[ 693], 00:37:28.686 | 99.99th=[ 693] 00:37:28.686 bw ( KiB/s): min= 16, max= 2800, per=3.91%, avg=1526.16, stdev=1266.93, samples=19 00:37:28.686 iops : min= 4, max= 700, avg=381.53, stdev=316.72, samples=19 00:37:28.687 lat (msec) : 10=0.25%, 20=9.69%, 50=86.48%, 250=0.35%, 500=0.65% 00:37:28.687 lat (msec) : 750=2.59% 00:37:28.687 cpu : usr=99.07%, sys=0.62%, ctx=15, majf=0, minf=28 00:37:28.687 IO depths : 1=3.7%, 2=8.2%, 4=19.1%, 8=59.6%, 16=9.4%, 32=0.0%, >=64=0.0% 00:37:28.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.687 complete : 0=0.0%, 4=92.6%, 8=2.3%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.687 issued rwts: total=4016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:28.687 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:28.687 filename2: (groupid=0, jobs=1): err= 0: pid=1384051: Fri Nov 15 12:00:52 2024 00:37:28.687 read: IOPS=399, BW=1599KiB/s (1638kB/s)(15.6MiB/10013msec) 00:37:28.687 slat (usec): min=5, max=163, avg=19.32, stdev=18.17 00:37:28.687 clat (msec): min=11, max=572, avg=39.86, stdev=76.94 00:37:28.687 lat (msec): min=11, max=572, avg=39.88, stdev=76.93 00:37:28.687 clat percentiles (msec): 00:37:28.687 | 1.00th=[ 15], 5.00th=[ 18], 10.00th=[ 20], 20.00th=[ 23], 00:37:28.687 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:37:28.687 | 70.00th=[ 25], 80.00th=[ 26], 90.00th=[ 30], 95.00th=[ 39], 00:37:28.687 | 99.00th=[ 409], 99.50th=[ 535], 99.90th=[ 542], 99.95th=[ 542], 00:37:28.687 | 99.99th=[ 575] 00:37:28.687 bw ( KiB/s): min= 64, max= 3024, per=3.95%, avg=1539.79, stdev=1262.72, samples=19 00:37:28.687 iops : min= 16, max= 756, avg=384.95, stdev=315.68, samples=19 00:37:28.687 lat (msec) : 20=10.87%, 50=84.74%, 500=3.75%, 750=0.65% 00:37:28.687 cpu : usr=99.05%, sys=0.63%, ctx=16, majf=0, minf=37 00:37:28.687 IO depths : 1=3.0%, 2=6.4%, 4=16.3%, 8=64.3%, 16=10.0%, 32=0.0%, >=64=0.0% 00:37:28.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.687 complete : 0=0.0%, 4=91.8%, 8=3.0%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.687 issued rwts: total=4003,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:37:28.687 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:28.687 filename2: (groupid=0, jobs=1): err= 0: pid=1384052: Fri Nov 15 12:00:52 2024 00:37:28.687 read: IOPS=398, BW=1593KiB/s (1631kB/s)(15.6MiB/10010msec) 00:37:28.687 slat (usec): min=4, max=114, avg=24.25, stdev=17.62 00:37:28.687 clat (msec): min=9, max=688, avg=39.97, stdev=81.51 00:37:28.687 lat (msec): min=9, max=688, avg=40.00, stdev=81.51 00:37:28.687 clat percentiles (msec): 00:37:28.687 | 1.00th=[ 16], 5.00th=[ 23], 10.00th=[ 23], 20.00th=[ 24], 00:37:28.687 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:37:28.687 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 26], 95.00th=[ 32], 00:37:28.687 | 99.00th=[ 523], 99.50th=[ 531], 99.90th=[ 667], 99.95th=[ 693], 00:37:28.687 | 99.99th=[ 693] 00:37:28.687 bw ( KiB/s): min= 112, max= 2816, per=3.92%, avg=1530.37, stdev=1249.50, samples=19 00:37:28.687 iops : min= 28, max= 704, avg=382.58, stdev=312.36, samples=19 00:37:28.687 lat (msec) : 10=0.25%, 20=3.11%, 50=92.22%, 100=0.25%, 250=0.50% 00:37:28.687 lat (msec) : 500=2.16%, 750=1.51% 00:37:28.687 cpu : usr=99.18%, sys=0.52%, ctx=21, majf=0, minf=41 00:37:28.687 IO depths : 1=5.0%, 2=10.3%, 4=22.5%, 8=54.6%, 16=7.6%, 32=0.0%, >=64=0.0% 00:37:28.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.687 complete : 0=0.0%, 4=93.4%, 8=0.9%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.687 issued rwts: total=3986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:28.687 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:28.687 filename2: (groupid=0, jobs=1): err= 0: pid=1384053: Fri Nov 15 12:00:52 2024 00:37:28.687 read: IOPS=420, BW=1681KiB/s (1721kB/s)(16.4MiB/10016msec) 00:37:28.687 slat (usec): min=5, max=159, avg=19.37, stdev=19.51 00:37:28.687 clat (msec): min=6, max=531, avg=37.91, stdev=72.13 00:37:28.687 lat (msec): min=6, max=531, avg=37.93, stdev=72.12 00:37:28.687 clat percentiles (msec): 00:37:28.687 | 1.00th=[ 9], 5.00th=[ 15], 10.00th=[ 18], 20.00th=[ 21], 00:37:28.687 | 30.00th=[ 23], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:37:28.687 | 70.00th=[ 24], 80.00th=[ 25], 90.00th=[ 27], 95.00th=[ 36], 00:37:28.687 | 99.00th=[ 397], 99.50th=[ 409], 99.90th=[ 531], 99.95th=[ 531], 00:37:28.687 | 99.99th=[ 531] 00:37:28.687 bw ( KiB/s): min= 128, max= 3168, per=4.31%, avg=1679.55, stdev=1305.08, samples=20 00:37:28.687 iops : min= 32, max= 792, avg=419.85, stdev=326.32, samples=20 00:37:28.687 lat (msec) : 10=1.28%, 20=17.30%, 50=76.86%, 250=0.38%, 500=3.94% 00:37:28.687 lat (msec) : 750=0.24% 00:37:28.687 cpu : usr=99.12%, sys=0.56%, ctx=13, majf=0, minf=29 00:37:28.687 IO depths : 1=3.0%, 2=6.3%, 4=15.8%, 8=64.9%, 16=10.0%, 32=0.0%, >=64=0.0% 00:37:28.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.687 complete : 0=0.0%, 4=91.5%, 8=3.4%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.687 issued rwts: total=4209,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:28.687 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:28.687 filename2: (groupid=0, jobs=1): err= 0: pid=1384054: Fri Nov 15 12:00:52 2024 00:37:28.687 read: IOPS=393, BW=1573KiB/s (1610kB/s)(15.4MiB/10001msec) 00:37:28.687 slat (usec): min=4, max=131, avg=23.67, stdev=18.50 00:37:28.687 clat (msec): min=7, max=694, avg=40.50, stdev=88.85 00:37:28.687 lat (msec): min=7, max=694, avg=40.53, stdev=88.85 00:37:28.687 clat percentiles (msec): 00:37:28.687 | 1.00th=[ 15], 5.00th=[ 18], 10.00th=[ 22], 20.00th=[ 23], 00:37:28.687 | 
30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 24], 60.00th=[ 24], 00:37:28.687 | 70.00th=[ 25], 80.00th=[ 25], 90.00th=[ 28], 95.00th=[ 37], 00:37:28.687 | 99.00th=[ 550], 99.50th=[ 575], 99.90th=[ 667], 99.95th=[ 693], 00:37:28.687 | 99.99th=[ 693] 00:37:28.687 bw ( KiB/s): min= 16, max= 2864, per=3.88%, avg=1514.11, stdev=1252.61, samples=19 00:37:28.687 iops : min= 4, max= 716, avg=378.53, stdev=313.15, samples=19 00:37:28.687 lat (msec) : 10=0.15%, 20=7.53%, 50=88.66%, 250=0.41%, 500=0.51% 00:37:28.687 lat (msec) : 750=2.75% 00:37:28.687 cpu : usr=99.00%, sys=0.67%, ctx=22, majf=0, minf=34 00:37:28.687 IO depths : 1=3.5%, 2=7.5%, 4=17.7%, 8=61.9%, 16=9.4%, 32=0.0%, >=64=0.0% 00:37:28.687 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.687 complete : 0=0.0%, 4=92.3%, 8=2.4%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:28.687 issued rwts: total=3932,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:28.687 latency : target=0, window=0, percentile=100.00%, depth=16 00:37:28.687 00:37:28.687 Run status group 0 (all jobs): 00:37:28.687 READ: bw=38.1MiB/s (39.9MB/s), 1573KiB/s-1722KiB/s (1610kB/s-1763kB/s), io=382MiB (400MB), run=10001-10024msec 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:28.687 12:00:52 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:37:28.687 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:28.688 bdev_null0 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:28.688 [2024-11-15 12:00:52.740335] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:28.688 bdev_null1 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:28.688 { 00:37:28.688 "params": { 00:37:28.688 "name": "Nvme$subsystem", 00:37:28.688 "trtype": "$TEST_TRANSPORT", 00:37:28.688 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:28.688 "adrfam": "ipv4", 00:37:28.688 "trsvcid": "$NVMF_PORT", 00:37:28.688 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:28.688 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:28.688 "hdgst": ${hdgst:-false}, 00:37:28.688 "ddgst": ${ddgst:-false} 00:37:28.688 }, 00:37:28.688 "method": "bdev_nvme_attach_controller" 00:37:28.688 } 00:37:28.688 EOF 00:37:28.688 )") 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:28.688 { 00:37:28.688 "params": { 00:37:28.688 "name": "Nvme$subsystem", 00:37:28.688 "trtype": "$TEST_TRANSPORT", 00:37:28.688 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:28.688 "adrfam": "ipv4", 00:37:28.688 "trsvcid": "$NVMF_PORT", 00:37:28.688 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:28.688 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:28.688 "hdgst": ${hdgst:-false}, 00:37:28.688 "ddgst": ${ddgst:-false} 00:37:28.688 }, 00:37:28.688 "method": "bdev_nvme_attach_controller" 00:37:28.688 } 00:37:28.688 EOF 00:37:28.688 )") 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file++ )) 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:28.688 "params": { 00:37:28.688 "name": "Nvme0", 00:37:28.688 "trtype": "tcp", 00:37:28.688 "traddr": "10.0.0.2", 00:37:28.688 "adrfam": "ipv4", 00:37:28.688 "trsvcid": "4420", 00:37:28.688 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:28.688 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:28.688 "hdgst": false, 00:37:28.688 "ddgst": false 00:37:28.688 }, 00:37:28.688 "method": "bdev_nvme_attach_controller" 00:37:28.688 },{ 00:37:28.688 "params": { 00:37:28.688 "name": "Nvme1", 00:37:28.688 "trtype": "tcp", 00:37:28.688 "traddr": "10.0.0.2", 00:37:28.688 "adrfam": "ipv4", 00:37:28.688 "trsvcid": "4420", 00:37:28.688 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:28.688 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:28.688 "hdgst": false, 00:37:28.688 "ddgst": false 00:37:28.688 }, 00:37:28.688 "method": "bdev_nvme_attach_controller" 00:37:28.688 }' 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:28.688 12:00:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:28.688 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:28.688 ... 00:37:28.688 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:28.688 ... 
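For reference, the launch traced above condenses to the standalone sketch below. This is a reconstruction, not the harness itself: the controller entry is copied from the generated config printed above (only Nvme0 is shown; the real run attaches Nvme1/cnode1 the same way), the outer "subsystems"/"bdev" wrapper is assumed from the SPDK fio plugin's documented JSON layout, and /tmp/bdev.json plus randread.fio are illustrative stand-ins for the anonymous /dev/fd/62 and /dev/fd/61 descriptors the script feeds in.

#!/usr/bin/env bash
# Sketch: run stock fio against an NVMe-oF/TCP target through the SPDK bdev
# ioengine. File names are illustrative; parameter values mirror the trace.
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# LD_PRELOAD injects the SPDK plugin into the stock fio binary, exactly as the
# trace above does with spdk/build/fio/spdk_bdev.
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json randread.fio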
00:37:28.688 fio-3.35 00:37:28.688 Starting 4 threads 00:37:34.048 00:37:34.048 filename0: (groupid=0, jobs=1): err= 0: pid=1386354: Fri Nov 15 12:00:59 2024 00:37:34.048 read: IOPS=2914, BW=22.8MiB/s (23.9MB/s)(114MiB/5002msec) 00:37:34.048 slat (nsec): min=5495, max=52638, avg=7586.91, stdev=2995.18 00:37:34.048 clat (usec): min=1143, max=5350, avg=2726.14, stdev=274.50 00:37:34.048 lat (usec): min=1149, max=5378, avg=2733.73, stdev=274.40 00:37:34.048 clat percentiles (usec): 00:37:34.048 | 1.00th=[ 1893], 5.00th=[ 2311], 10.00th=[ 2474], 20.00th=[ 2606], 00:37:34.048 | 30.00th=[ 2704], 40.00th=[ 2737], 50.00th=[ 2737], 60.00th=[ 2737], 00:37:34.048 | 70.00th=[ 2769], 80.00th=[ 2802], 90.00th=[ 2966], 95.00th=[ 3064], 00:37:34.048 | 99.00th=[ 3752], 99.50th=[ 4047], 99.90th=[ 4490], 99.95th=[ 4555], 00:37:34.048 | 99.99th=[ 5276] 00:37:34.048 bw ( KiB/s): min=23008, max=23695, per=25.27%, avg=23317.22, stdev=181.82, samples=9 00:37:34.049 iops : min= 2876, max= 2961, avg=2914.56, stdev=22.50, samples=9 00:37:34.049 lat (msec) : 2=1.43%, 4=98.03%, 10=0.54% 00:37:34.049 cpu : usr=95.28%, sys=4.46%, ctx=5, majf=0, minf=97 00:37:34.049 IO depths : 1=0.1%, 2=0.3%, 4=68.6%, 8=31.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:34.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.049 complete : 0=0.0%, 4=95.2%, 8=4.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.049 issued rwts: total=14578,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.049 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:34.049 filename0: (groupid=0, jobs=1): err= 0: pid=1386355: Fri Nov 15 12:00:59 2024 00:37:34.049 read: IOPS=2835, BW=22.2MiB/s (23.2MB/s)(111MiB/5002msec) 00:37:34.049 slat (nsec): min=5489, max=29258, avg=6194.78, stdev=1965.32 00:37:34.049 clat (usec): min=1383, max=44905, avg=2804.56, stdev=1048.53 00:37:34.049 lat (usec): min=1389, max=44932, avg=2810.75, stdev=1048.68 00:37:34.049 clat percentiles (usec): 00:37:34.049 | 1.00th=[ 2089], 5.00th=[ 2409], 10.00th=[ 2540], 20.00th=[ 2671], 00:37:34.049 | 30.00th=[ 2704], 40.00th=[ 2737], 50.00th=[ 2737], 60.00th=[ 2769], 00:37:34.049 | 70.00th=[ 2769], 80.00th=[ 2802], 90.00th=[ 3032], 95.00th=[ 3392], 00:37:34.049 | 99.00th=[ 4113], 99.50th=[ 4293], 99.90th=[ 4948], 99.95th=[44827], 00:37:34.049 | 99.99th=[44827] 00:37:34.049 bw ( KiB/s): min=21045, max=23072, per=24.56%, avg=22658.33, stdev=633.66, samples=9 00:37:34.049 iops : min= 2630, max= 2884, avg=2832.22, stdev=79.41, samples=9 00:37:34.049 lat (msec) : 2=0.48%, 4=98.18%, 10=1.28%, 50=0.06% 00:37:34.049 cpu : usr=95.92%, sys=3.80%, ctx=7, majf=0, minf=110 00:37:34.049 IO depths : 1=0.1%, 2=0.2%, 4=71.9%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:34.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.049 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.049 issued rwts: total=14182,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.049 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:34.049 filename1: (groupid=0, jobs=1): err= 0: pid=1386356: Fri Nov 15 12:00:59 2024 00:37:34.049 read: IOPS=2891, BW=22.6MiB/s (23.7MB/s)(113MiB/5003msec) 00:37:34.049 slat (nsec): min=5485, max=50333, avg=6275.78, stdev=2249.94 00:37:34.049 clat (usec): min=1216, max=5809, avg=2749.17, stdev=332.10 00:37:34.049 lat (usec): min=1222, max=5815, avg=2755.44, stdev=332.08 00:37:34.049 clat percentiles (usec): 00:37:34.049 | 1.00th=[ 1975], 5.00th=[ 2278], 10.00th=[ 2474], 20.00th=[ 2606], 00:37:34.049 | 
30.00th=[ 2704], 40.00th=[ 2737], 50.00th=[ 2737], 60.00th=[ 2737], 00:37:34.049 | 70.00th=[ 2769], 80.00th=[ 2802], 90.00th=[ 2999], 95.00th=[ 3294], 00:37:34.049 | 99.00th=[ 4113], 99.50th=[ 4228], 99.90th=[ 4817], 99.95th=[ 4883], 00:37:34.049 | 99.99th=[ 5800] 00:37:34.049 bw ( KiB/s): min=22944, max=23376, per=25.11%, avg=23169.78, stdev=155.63, samples=9 00:37:34.049 iops : min= 2868, max= 2922, avg=2896.22, stdev=19.45, samples=9 00:37:34.049 lat (msec) : 2=1.12%, 4=97.34%, 10=1.54% 00:37:34.049 cpu : usr=95.98%, sys=3.76%, ctx=6, majf=0, minf=51 00:37:34.049 IO depths : 1=0.1%, 2=0.5%, 4=72.8%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:34.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.049 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.049 issued rwts: total=14464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.049 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:34.049 filename1: (groupid=0, jobs=1): err= 0: pid=1386357: Fri Nov 15 12:00:59 2024 00:37:34.049 read: IOPS=2895, BW=22.6MiB/s (23.7MB/s)(113MiB/5001msec) 00:37:34.049 slat (nsec): min=5496, max=53492, avg=7681.75, stdev=2947.18 00:37:34.049 clat (usec): min=1070, max=5176, avg=2743.70, stdev=254.56 00:37:34.049 lat (usec): min=1076, max=5182, avg=2751.39, stdev=254.49 00:37:34.049 clat percentiles (usec): 00:37:34.049 | 1.00th=[ 2089], 5.00th=[ 2409], 10.00th=[ 2540], 20.00th=[ 2638], 00:37:34.049 | 30.00th=[ 2704], 40.00th=[ 2737], 50.00th=[ 2737], 60.00th=[ 2737], 00:37:34.049 | 70.00th=[ 2769], 80.00th=[ 2769], 90.00th=[ 2966], 95.00th=[ 3064], 00:37:34.049 | 99.00th=[ 3785], 99.50th=[ 4080], 99.90th=[ 4621], 99.95th=[ 4883], 00:37:34.049 | 99.99th=[ 5145] 00:37:34.049 bw ( KiB/s): min=22880, max=23328, per=25.09%, avg=23148.44, stdev=152.79, samples=9 00:37:34.049 iops : min= 2860, max= 2916, avg=2893.56, stdev=19.10, samples=9 00:37:34.049 lat (msec) : 2=0.79%, 4=98.58%, 10=0.63% 00:37:34.049 cpu : usr=95.86%, sys=3.88%, ctx=33, majf=0, minf=73 00:37:34.049 IO depths : 1=0.1%, 2=0.2%, 4=70.6%, 8=29.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:34.049 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.049 complete : 0=0.0%, 4=93.7%, 8=6.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:34.049 issued rwts: total=14478,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:34.049 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:34.049 00:37:34.049 Run status group 0 (all jobs): 00:37:34.049 READ: bw=90.1MiB/s (94.5MB/s), 22.2MiB/s-22.8MiB/s (23.2MB/s-23.9MB/s), io=451MiB (473MB), run=5001-5003msec 00:37:34.049 12:00:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:37:34.049 12:00:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:34.049 12:00:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:34.049 12:00:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:34.049 12:00:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:34.049 12:00:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:34.049 12:00:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.049 12:00:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.049 12:00:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.049 12:00:59 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:34.049 12:00:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.049 12:00:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.049 12:00:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.049 12:00:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:34.049 12:00:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:34.049 12:00:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:34.049 12:00:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:34.049 12:00:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.049 12:00:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.049 12:00:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.049 12:00:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:34.049 12:00:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.049 12:00:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.049 12:00:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.049 00:37:34.049 real 0m24.583s 00:37:34.049 user 5m13.013s 00:37:34.049 sys 0m4.221s 00:37:34.049 12:00:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:34.049 12:00:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:34.049 ************************************ 00:37:34.049 END TEST fio_dif_rand_params 00:37:34.049 ************************************ 00:37:34.049 12:00:59 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:37:34.049 12:00:59 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:34.049 12:00:59 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:34.049 12:00:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:34.049 ************************************ 00:37:34.049 START TEST fio_dif_digest 00:37:34.049 ************************************ 00:37:34.049 12:00:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:37:34.049 12:00:59 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:37:34.049 12:00:59 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:37:34.049 12:00:59 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:37:34.049 12:00:59 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:37:34.049 12:00:59 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:37:34.049 12:00:59 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:37:34.049 12:00:59 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:37:34.049 12:00:59 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:37:34.049 12:00:59 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:37:34.049 12:00:59 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:37:34.049 12:00:59 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:37:34.049 12:00:59 nvmf_dif.fio_dif_digest -- 
target/dif.sh@28 -- # local sub 00:37:34.049 12:00:59 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:37:34.049 12:00:59 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:37:34.049 12:00:59 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:37:34.049 12:00:59 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:34.049 12:00:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.049 12:00:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:34.049 bdev_null0 00:37:34.049 12:00:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.049 12:00:59 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:34.050 [2024-11-15 12:00:59.375348] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:34.050 { 00:37:34.050 "params": { 00:37:34.050 "name": "Nvme$subsystem", 00:37:34.050 "trtype": "$TEST_TRANSPORT", 00:37:34.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:34.050 "adrfam": "ipv4", 00:37:34.050 "trsvcid": "$NVMF_PORT", 00:37:34.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:34.050 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:37:34.050 "hdgst": ${hdgst:-false}, 00:37:34.050 "ddgst": ${ddgst:-false} 00:37:34.050 }, 00:37:34.050 "method": "bdev_nvme_attach_controller" 00:37:34.050 } 00:37:34.050 EOF 00:37:34.050 )") 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:34.050 "params": { 00:37:34.050 "name": "Nvme0", 00:37:34.050 "trtype": "tcp", 00:37:34.050 "traddr": "10.0.0.2", 00:37:34.050 "adrfam": "ipv4", 00:37:34.050 "trsvcid": "4420", 00:37:34.050 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:34.050 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:34.050 "hdgst": true, 00:37:34.050 "ddgst": true 00:37:34.050 }, 00:37:34.050 "method": "bdev_nvme_attach_controller" 00:37:34.050 }' 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:34.050 12:00:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:34.342 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:34.342 ... 
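The digest run above differs from the earlier rand_params runs in two places, both visible in the trace: the job shape set at target/dif.sh@127 (bs=128k, numjobs=3, iodepth=3, runtime=10, against a dif-type-3 null bdev) and the controller entry, which now requests NVMe/TCP header and data digests so PDU headers and payloads carry CRC32C checksums end-to-end. A minimal sketch of that entry, copied from the generated config printed above (the output path is illustrative):

# Digest-test controller entry: identical to the rand_params one except for
# the two digest switches, which are negotiated on the TCP connection.
cat > /tmp/bdev_digest_entry.json <<'EOF'
{
  "method": "bdev_nvme_attach_controller",
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": true,
    "ddgst": true
  }
}
EOF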
00:37:34.342 fio-3.35 00:37:34.342 Starting 3 threads 00:37:46.646 00:37:46.646 filename0: (groupid=0, jobs=1): err= 0: pid=1387774: Fri Nov 15 12:01:10 2024 00:37:46.646 read: IOPS=273, BW=34.1MiB/s (35.8MB/s)(343MiB/10044msec) 00:37:46.646 slat (nsec): min=5866, max=49195, avg=8373.68, stdev=1905.72 00:37:46.646 clat (usec): min=7995, max=53117, avg=10955.71, stdev=2045.36 00:37:46.646 lat (usec): min=8004, max=53126, avg=10964.09, stdev=2045.58 00:37:46.646 clat percentiles (usec): 00:37:46.646 | 1.00th=[ 8979], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[10028], 00:37:46.646 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10945], 00:37:46.646 | 70.00th=[11207], 80.00th=[11600], 90.00th=[12256], 95.00th=[13173], 00:37:46.646 | 99.00th=[14353], 99.50th=[15008], 99.90th=[52167], 99.95th=[52691], 00:37:46.646 | 99.99th=[53216] 00:37:46.646 bw ( KiB/s): min=28416, max=36608, per=31.13%, avg=35097.60, stdev=2334.34, samples=20 00:37:46.646 iops : min= 222, max= 286, avg=274.20, stdev=18.24, samples=20 00:37:46.646 lat (msec) : 10=18.37%, 20=81.45%, 50=0.04%, 100=0.15% 00:37:46.646 cpu : usr=94.43%, sys=5.29%, ctx=29, majf=0, minf=172 00:37:46.646 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:46.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.646 issued rwts: total=2744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.646 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:46.646 filename0: (groupid=0, jobs=1): err= 0: pid=1387775: Fri Nov 15 12:01:10 2024 00:37:46.646 read: IOPS=287, BW=35.9MiB/s (37.7MB/s)(361MiB/10044msec) 00:37:46.646 slat (nsec): min=6147, max=36694, avg=9966.05, stdev=1714.19 00:37:46.646 clat (usec): min=6972, max=50272, avg=10413.72, stdev=1519.74 00:37:46.646 lat (usec): min=6978, max=50282, avg=10423.69, stdev=1519.86 00:37:46.646 clat percentiles (usec): 00:37:46.646 | 1.00th=[ 8160], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[ 9503], 00:37:46.646 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10290], 60.00th=[10421], 00:37:46.646 | 70.00th=[10683], 80.00th=[11076], 90.00th=[11731], 95.00th=[12780], 00:37:46.646 | 99.00th=[14222], 99.50th=[14746], 99.90th=[16188], 99.95th=[47449], 00:37:46.646 | 99.99th=[50070] 00:37:46.646 bw ( KiB/s): min=29184, max=39168, per=32.75%, avg=36919.05, stdev=2732.94, samples=20 00:37:46.646 iops : min= 228, max= 306, avg=288.40, stdev=21.33, samples=20 00:37:46.646 lat (msec) : 10=38.81%, 20=61.12%, 50=0.03%, 100=0.03% 00:37:46.646 cpu : usr=95.44%, sys=4.28%, ctx=15, majf=0, minf=53 00:37:46.646 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:46.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.646 issued rwts: total=2886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.646 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:46.646 filename0: (groupid=0, jobs=1): err= 0: pid=1387776: Fri Nov 15 12:01:10 2024 00:37:46.646 read: IOPS=321, BW=40.2MiB/s (42.2MB/s)(402MiB/10002msec) 00:37:46.646 slat (nsec): min=5894, max=52596, avg=9002.58, stdev=1966.00 00:37:46.646 clat (usec): min=4219, max=14346, avg=9316.30, stdev=1064.14 00:37:46.646 lat (usec): min=4225, max=14382, avg=9325.30, stdev=1064.33 00:37:46.646 clat percentiles (usec): 00:37:46.646 | 1.00th=[ 7373], 5.00th=[ 7963], 10.00th=[ 8160], 
20.00th=[ 8586], 00:37:46.646 | 30.00th=[ 8848], 40.00th=[ 8979], 50.00th=[ 9110], 60.00th=[ 9372], 00:37:46.646 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10552], 95.00th=[11600], 00:37:46.646 | 99.00th=[12780], 99.50th=[13173], 99.90th=[13829], 99.95th=[14353], 00:37:46.646 | 99.99th=[14353] 00:37:46.646 bw ( KiB/s): min=31744, max=43520, per=36.46%, avg=41108.21, stdev=3301.99, samples=19 00:37:46.646 iops : min= 248, max= 340, avg=321.16, stdev=25.80, samples=19 00:37:46.646 lat (msec) : 10=83.18%, 20=16.82% 00:37:46.646 cpu : usr=96.45%, sys=3.29%, ctx=31, majf=0, minf=184 00:37:46.646 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:46.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:46.646 issued rwts: total=3217,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:46.646 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:46.646 00:37:46.646 Run status group 0 (all jobs): 00:37:46.646 READ: bw=110MiB/s (115MB/s), 34.1MiB/s-40.2MiB/s (35.8MB/s-42.2MB/s), io=1106MiB (1160MB), run=10002-10044msec 00:37:46.646 12:01:10 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:37:46.646 12:01:10 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:37:46.646 12:01:10 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:37:46.646 12:01:10 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:46.646 12:01:10 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:37:46.646 12:01:10 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:46.646 12:01:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:46.646 12:01:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:46.646 12:01:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:46.646 12:01:10 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:46.646 12:01:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:46.646 12:01:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:46.646 12:01:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:46.646 00:37:46.646 real 0m11.208s 00:37:46.646 user 0m42.886s 00:37:46.646 sys 0m1.641s 00:37:46.646 12:01:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:46.646 12:01:10 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:46.646 ************************************ 00:37:46.646 END TEST fio_dif_digest 00:37:46.646 ************************************ 00:37:46.646 12:01:10 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:37:46.646 12:01:10 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:37:46.646 12:01:10 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:46.646 12:01:10 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:37:46.646 12:01:10 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:46.646 12:01:10 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:37:46.646 12:01:10 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:46.646 12:01:10 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:46.646 rmmod nvme_tcp 00:37:46.646 rmmod nvme_fabrics 00:37:46.646 rmmod nvme_keyring 00:37:46.646 12:01:10 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:46.646 12:01:10 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:37:46.646 12:01:10 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:37:46.646 12:01:10 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 1377303 ']' 00:37:46.646 12:01:10 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 1377303 00:37:46.646 12:01:10 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 1377303 ']' 00:37:46.646 12:01:10 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 1377303 00:37:46.646 12:01:10 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:37:46.646 12:01:10 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:46.646 12:01:10 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1377303 00:37:46.646 12:01:10 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:37:46.646 12:01:10 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:37:46.646 12:01:10 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1377303' 00:37:46.646 killing process with pid 1377303 00:37:46.646 12:01:10 nvmf_dif -- common/autotest_common.sh@971 -- # kill 1377303 00:37:46.646 12:01:10 nvmf_dif -- common/autotest_common.sh@976 -- # wait 1377303 00:37:46.646 12:01:10 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:37:46.646 12:01:10 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:49.194 Waiting for block devices as requested 00:37:49.194 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:49.194 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:49.194 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:49.194 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:49.194 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:49.194 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:49.454 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:49.454 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:49.454 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:49.714 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:49.714 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:49.714 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:49.975 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:49.975 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:49.975 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:50.236 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:50.236 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:50.497 12:01:15 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:50.497 12:01:15 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:50.497 12:01:15 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:37:50.497 12:01:15 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:37:50.497 12:01:15 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:50.497 12:01:15 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:37:50.497 12:01:15 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:50.497 12:01:15 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:50.497 12:01:15 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:50.497 12:01:15 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:50.497 12:01:15 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:53.046 12:01:18 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:53.046 
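The teardown traced above condenses to the order-preserving sketch below: drop the subsystem and its backing null bdev over RPC, flush dirty pages, pull the kernel initiator modules (the rmmod lines show nvme_tcp, nvme_fabrics and nvme_keyring going away), stop the target process, rebind devices to their kernel drivers, and clean up firewall rules and addressing. rpc.py is the repo script that the harness's rpc_cmd wraps; $nvmf_tgt_pid is an illustrative stand-in for the PID the harness tracked (1377303 here).

# Sketch of the cleanup sequence as run above; paths match this workspace.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$SPDK/scripts/rpc.py bdev_null_delete bdev_null0
sync
modprobe -v -r nvme-tcp        # unloads nvme_tcp plus fabrics/keyring deps
modprobe -v -r nvme-fabrics
kill "$nvmf_tgt_pid"           # harness: killprocess 1377303, then wait
$SPDK/scripts/setup.sh reset   # rebind NVMe/ioatdma devices to kernel drivers
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only test rules
ip -4 addr flush cvl_0_1       # clear the IPv4 address from the cvl_0_1 test interface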
00:37:53.046 real 1m18.646s 00:37:53.046 user 7m58.582s 00:37:53.046 sys 0m21.317s 00:37:53.046 12:01:18 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:53.046 12:01:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:53.046 ************************************ 00:37:53.046 END TEST nvmf_dif 00:37:53.046 ************************************ 00:37:53.046 12:01:18 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:53.046 12:01:18 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:53.046 12:01:18 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:53.046 12:01:18 -- common/autotest_common.sh@10 -- # set +x 00:37:53.046 ************************************ 00:37:53.046 START TEST nvmf_abort_qd_sizes 00:37:53.046 ************************************ 00:37:53.046 12:01:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:53.046 * Looking for test storage... 00:37:53.046 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:53.046 12:01:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:53.046 12:01:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:37:53.046 12:01:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:53.046 12:01:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:53.046 12:01:18 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:53.046 12:01:18 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:53.046 12:01:18 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:53.046 12:01:18 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:37:53.046 12:01:18 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:37:53.046 12:01:18 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:37:53.046 12:01:18 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:37:53.046 12:01:18 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:37:53.046 12:01:18 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:37:53.046 12:01:18 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:37:53.046 12:01:18 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:53.046 12:01:18 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:37:53.046 12:01:18 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:37:53.046 12:01:18 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:53.046 12:01:18 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:53.046 12:01:18 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:37:53.046 12:01:18 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:37:53.046 12:01:18 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:53.046 12:01:18 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:37:53.046 12:01:18 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:37:53.046 12:01:18 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:37:53.046 12:01:18 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:37:53.046 12:01:18 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:53.046 12:01:18 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:37:53.046 12:01:18 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:37:53.046 12:01:18 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:53.046 12:01:18 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:53.046 12:01:18 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:37:53.046 12:01:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:53.046 12:01:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:53.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:53.046 --rc genhtml_branch_coverage=1 00:37:53.046 --rc genhtml_function_coverage=1 00:37:53.046 --rc genhtml_legend=1 00:37:53.046 --rc geninfo_all_blocks=1 00:37:53.046 --rc geninfo_unexecuted_blocks=1 00:37:53.046 00:37:53.046 ' 00:37:53.046 12:01:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:53.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:53.047 --rc genhtml_branch_coverage=1 00:37:53.047 --rc genhtml_function_coverage=1 00:37:53.047 --rc genhtml_legend=1 00:37:53.047 --rc geninfo_all_blocks=1 00:37:53.047 --rc geninfo_unexecuted_blocks=1 00:37:53.047 00:37:53.047 ' 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:53.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:53.047 --rc genhtml_branch_coverage=1 00:37:53.047 --rc genhtml_function_coverage=1 00:37:53.047 --rc genhtml_legend=1 00:37:53.047 --rc geninfo_all_blocks=1 00:37:53.047 --rc geninfo_unexecuted_blocks=1 00:37:53.047 00:37:53.047 ' 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:53.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:53.047 --rc genhtml_branch_coverage=1 00:37:53.047 --rc genhtml_function_coverage=1 00:37:53.047 --rc genhtml_legend=1 00:37:53.047 --rc geninfo_all_blocks=1 00:37:53.047 --rc geninfo_unexecuted_blocks=1 00:37:53.047 00:37:53.047 ' 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:53.047 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:37:53.047 12:01:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:01.195 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:01.195 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:38:01.195 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:01.195 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:01.195 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:01.195 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:01.195 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:01.195 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:38:01.195 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:01.195 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:38:01.195 12:01:25 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:38:01.195 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:38:01.195 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:38:01.195 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:38:01.195 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:38:01.195 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:01.195 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:01.195 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:01.195 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:01.195 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:01.195 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:01.195 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:01.195 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:01.195 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:01.196 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:01.196 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:01.196 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:01.196 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:01.196 12:01:25 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:01.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:01.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.556 ms 00:38:01.196 00:38:01.196 --- 10.0.0.2 ping statistics --- 00:38:01.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:01.196 rtt min/avg/max/mdev = 0.556/0.556/0.556/0.000 ms 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:01.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:01.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:38:01.196 00:38:01.196 --- 10.0.0.1 ping statistics --- 00:38:01.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:01.196 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:38:01.196 12:01:25 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:03.739 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:03.739 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:03.739 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:03.739 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:03.739 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:03.739 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:03.739 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:03.739 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:03.739 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:03.739 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:04.001 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:04.001 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:04.001 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:04.001 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:04.001 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:04.001 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:04.001 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:38:04.263 12:01:29 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:04.263 12:01:29 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:04.263 12:01:29 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:04.263 12:01:29 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:04.263 12:01:29 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:04.263 12:01:29 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:04.525 12:01:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:38:04.525 12:01:29 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:04.526 12:01:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:38:04.526 12:01:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:04.526 12:01:29 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=1397212 00:38:04.526 12:01:29 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 1397212 00:38:04.526 12:01:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 1397212 ']' 00:38:04.526 12:01:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:04.526 12:01:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:04.526 12:01:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:04.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
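At this point the harness launches the SPDK NVMe-oF target inside the cvl_0_0_ns_spdk namespace (nvmf_tgt -i 0 -e 0xFFFF -m 0xf, as traced below) and blocks until the application's RPC socket answers. A minimal sketch of that launch-and-wait pattern, with the workspace path abbreviated and a generic poll loop standing in for the harness's actual waitforlisten helper:

  # Start the target in the test namespace; -i 0 picks shared-memory id 0,
  # -m 0xf runs reactors on cores 0-3 (path abbreviated from the log).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &

  # Poll the RPC socket until it responds, or give up after ~10 s.
  for _ in $(seq 1 100); do
      ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done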
00:38:04.526 12:01:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:04.526 12:01:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:04.526 12:01:29 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:38:04.526 [2024-11-15 12:01:29.842261] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:38:04.526 [2024-11-15 12:01:29.842322] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:04.526 [2024-11-15 12:01:29.940953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:04.526 [2024-11-15 12:01:29.995339] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:04.526 [2024-11-15 12:01:29.995391] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:04.526 [2024-11-15 12:01:29.995400] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:04.526 [2024-11-15 12:01:29.995407] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:04.526 [2024-11-15 12:01:29.995413] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:04.526 [2024-11-15 12:01:29.997609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:04.526 [2024-11-15 12:01:29.997703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:04.526 [2024-11-15 12:01:29.997865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:04.526 [2024-11-15 12:01:29.997866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:38:05.471 12:01:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:05.471 12:01:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:38:05.471 12:01:30 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:05.471 12:01:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:05.471 12:01:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:05.471 12:01:30 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:05.471 12:01:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:38:05.471 12:01:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:38:05.471 12:01:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:38:05.471 12:01:30 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:38:05.471 12:01:30 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:38:05.471 12:01:30 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:38:05.471 12:01:30 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:38:05.471 12:01:30 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:38:05.471 12:01:30 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 
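The nvme_in_userspace helper traced above collects NVMe controllers by their PCI class code (0x010802) and keeps only functions still bound to the kernel nvme driver, which is how 0000:65:00.0 ends up selected as the test device. A rough sysfs-only equivalent of that walk, shown as a sketch rather than SPDK's actual pci_bus_cache implementation:

  # Print NVMe-class PCI functions (class 0x010802) still bound to the nvme driver.
  for dev in /sys/bus/pci/devices/*; do
      [[ $(cat "$dev/class") == 0x010802 ]] || continue   # NVMe class code
      bdf=${dev##*/}                                      # e.g. 0000:65:00.0
      [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] && echo "$bdf"
  done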
00:38:05.471 12:01:30 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:38:05.471 12:01:30 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:38:05.471 12:01:30 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:38:05.471 12:01:30 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:38:05.472 12:01:30 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:38:05.472 12:01:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:38:05.472 12:01:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:38:05.472 12:01:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:38:05.472 12:01:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:38:05.472 12:01:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:05.472 12:01:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:05.472 ************************************ 00:38:05.472 START TEST spdk_target_abort 00:38:05.472 ************************************ 00:38:05.472 12:01:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:38:05.472 12:01:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:38:05.472 12:01:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:38:05.472 12:01:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:05.472 12:01:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:05.735 spdk_targetn1 00:38:05.735 12:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:05.735 12:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:05.735 12:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:05.735 12:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:05.735 [2024-11-15 12:01:31.085162] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:05.735 12:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:05.735 12:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:38:05.736 12:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:05.736 12:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:05.736 12:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:05.736 12:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:38:05.736 12:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:05.736 12:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:05.736 12:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:05.736 12:01:31 
nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:38:05.736 12:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:05.736 12:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:05.736 [2024-11-15 12:01:31.137643] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:05.736 12:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:05.736 12:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:38:05.736 12:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:05.736 12:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:05.736 12:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:38:05.736 12:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:05.736 12:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:05.736 12:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:05.736 12:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:05.736 12:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:05.736 12:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:05.736 12:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:05.736 12:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:05.736 12:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:05.736 12:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:05.736 12:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:38:05.736 12:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:05.736 12:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:38:05.736 12:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:05.736 12:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:05.736 12:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:05.736 12:01:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 
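The rabort() helper assembles the transport ID string field by field (as traced above) and then runs the bundled abort example once per queue depth. Assuming the flags follow the usual SPDK example conventions — -q queue depth, -w rw with -M 50 a 50/50 read-write mix, -o 4 KiB I/Os, -r the target's transport ID — the loop reduces to roughly this sketch (path abbreviated):

  # Drive the abort test at each queue depth from the qds array in the trace.
  qds=(4 24 64)
  target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  for qd in "${qds[@]}"; do
      ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
  done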
00:38:05.996 [2024-11-15 12:01:31.414191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:472 len:8 PRP1 0x200004abe000 PRP2 0x0 00:38:05.996 [2024-11-15 12:01:31.414242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:003d p:1 m:0 dnr:0 00:38:05.996 [2024-11-15 12:01:31.422239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:688 len:8 PRP1 0x200004abe000 PRP2 0x0 00:38:05.996 [2024-11-15 12:01:31.422268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0059 p:1 m:0 dnr:0 00:38:05.996 [2024-11-15 12:01:31.460175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:1888 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:38:05.996 [2024-11-15 12:01:31.460208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00ef p:1 m:0 dnr:0 00:38:05.996 [2024-11-15 12:01:31.460700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:1928 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:38:05.996 [2024-11-15 12:01:31.460719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00f2 p:1 m:0 dnr:0 00:38:05.996 [2024-11-15 12:01:31.477212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:2456 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:38:05.996 [2024-11-15 12:01:31.477242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:38:06.256 [2024-11-15 12:01:31.506852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:3296 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:38:06.256 [2024-11-15 12:01:31.506885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:009e p:0 m:0 dnr:0 00:38:06.256 [2024-11-15 12:01:31.530178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:3984 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:38:06.256 [2024-11-15 12:01:31.530209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00f4 p:0 m:0 dnr:0 00:38:06.256 [2024-11-15 12:01:31.530670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:4008 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:38:06.256 [2024-11-15 12:01:31.530688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00f6 p:0 m:0 dnr:0 00:38:09.556 Initializing NVMe Controllers 00:38:09.556 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:09.556 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:09.556 Initialization complete. Launching workers. 
00:38:09.556 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11656, failed: 8 00:38:09.556 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3905, failed to submit 7759 00:38:09.556 success 687, unsuccessful 3218, failed 0 00:38:09.556 12:01:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:09.556 12:01:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:09.556 [2024-11-15 12:01:34.591733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:488 len:8 PRP1 0x200004e50000 PRP2 0x0 00:38:09.556 [2024-11-15 12:01:34.591773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:0048 p:1 m:0 dnr:0 00:38:09.556 [2024-11-15 12:01:34.646825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:177 nsid:1 lba:1592 len:8 PRP1 0x200004e54000 PRP2 0x0 00:38:09.556 [2024-11-15 12:01:34.646854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:177 cdw0:0 sqhd:00cf p:1 m:0 dnr:0 00:38:09.556 [2024-11-15 12:01:34.702648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:173 nsid:1 lba:2904 len:8 PRP1 0x200004e48000 PRP2 0x0 00:38:09.556 [2024-11-15 12:01:34.702673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:173 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:38:09.817 [2024-11-15 12:01:35.258712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:174 nsid:1 lba:15536 len:8 PRP1 0x200004e58000 PRP2 0x0 00:38:09.817 [2024-11-15 12:01:35.258743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:174 cdw0:0 sqhd:0097 p:0 m:0 dnr:0 00:38:12.361 Initializing NVMe Controllers 00:38:12.361 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:12.361 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:12.361 Initialization complete. Launching workers. 00:38:12.361 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8694, failed: 4 00:38:12.361 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1210, failed to submit 7488 00:38:12.361 success 333, unsuccessful 877, failed 0 00:38:12.361 12:01:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:12.361 12:01:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:15.664 Initializing NVMe Controllers 00:38:15.664 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:38:15.664 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:15.664 Initialization complete. Launching workers. 
00:38:15.664 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 43950, failed: 0 00:38:15.664 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2744, failed to submit 41206 00:38:15.664 success 584, unsuccessful 2160, failed 0 00:38:15.664 12:01:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:38:15.664 12:01:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:15.664 12:01:40 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:15.664 12:01:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:15.664 12:01:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:38:15.664 12:01:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:15.664 12:01:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:17.577 12:01:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:17.577 12:01:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1397212 00:38:17.577 12:01:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 1397212 ']' 00:38:17.577 12:01:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 1397212 00:38:17.577 12:01:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:38:17.577 12:01:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:17.577 12:01:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1397212 00:38:17.577 12:01:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:38:17.577 12:01:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:38:17.577 12:01:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1397212' 00:38:17.577 killing process with pid 1397212 00:38:17.577 12:01:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 1397212 00:38:17.577 12:01:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 1397212 00:38:17.577 00:38:17.577 real 0m12.253s 00:38:17.577 user 0m49.814s 00:38:17.577 sys 0m2.121s 00:38:17.577 12:01:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:17.577 12:01:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:17.577 ************************************ 00:38:17.577 END TEST spdk_target_abort 00:38:17.577 ************************************ 00:38:17.577 12:01:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:38:17.577 12:01:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:38:17.577 12:01:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:17.578 12:01:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:17.840 ************************************ 00:38:17.840 START TEST kernel_target_abort 00:38:17.840 
************************************ 00:38:17.840 12:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:38:17.840 12:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:38:17.840 12:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:38:17.840 12:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:38:17.840 12:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:38:17.840 12:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:38:17.840 12:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:38:17.840 12:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:38:17.840 12:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:38:17.840 12:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:38:17.840 12:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:38:17.840 12:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:38:17.840 12:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:38:17.840 12:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:38:17.840 12:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:38:17.840 12:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:17.840 12:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:17.840 12:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:38:17.840 12:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:38:17.840 12:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:38:17.840 12:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:38:17.840 12:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:38:17.840 12:01:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:21.142 Waiting for block devices as requested 00:38:21.142 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:21.142 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:21.142 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:21.403 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:21.403 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:21.403 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:21.664 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:21.664 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:21.664 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:21.924 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:21.924 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:21.924 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:22.185 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:22.185 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:22.185 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:22.446 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:22.446 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:22.706 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:38:22.706 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:38:22.706 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:38:22.706 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:38:22.706 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:38:22.706 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:38:22.706 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:38:22.706 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:38:22.706 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:38:22.967 No valid GPT data, bailing 00:38:22.967 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:38:22.967 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:38:22.967 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:38:22.967 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:38:22.967 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:38:22.967 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:22.967 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:22.967 12:01:48 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:38:22.967 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:38:22.967 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:38:22.967 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:38:22.967 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:38:22.967 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:38:22.967 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:38:22.968 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:38:22.968 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:38:22.968 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:38:22.968 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:38:22.968 00:38:22.968 Discovery Log Number of Records 2, Generation counter 2 00:38:22.968 =====Discovery Log Entry 0====== 00:38:22.968 trtype: tcp 00:38:22.968 adrfam: ipv4 00:38:22.968 subtype: current discovery subsystem 00:38:22.968 treq: not specified, sq flow control disable supported 00:38:22.968 portid: 1 00:38:22.968 trsvcid: 4420 00:38:22.968 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:38:22.968 traddr: 10.0.0.1 00:38:22.968 eflags: none 00:38:22.968 sectype: none 00:38:22.968 =====Discovery Log Entry 1====== 00:38:22.968 trtype: tcp 00:38:22.968 adrfam: ipv4 00:38:22.968 subtype: nvme subsystem 00:38:22.968 treq: not specified, sq flow control disable supported 00:38:22.968 portid: 1 00:38:22.968 trsvcid: 4420 00:38:22.968 subnqn: nqn.2016-06.io.spdk:testnqn 00:38:22.968 traddr: 10.0.0.1 00:38:22.968 eflags: none 00:38:22.968 sectype: none 00:38:22.968 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:38:22.968 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:38:22.968 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:38:22.968 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:38:22.968 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:38:22.968 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:38:22.968 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:38:22.968 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:38:22.968 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:38:22.968 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:22.968 12:01:48 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:38:22.968 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:22.968 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:38:22.968 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:22.968 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:38:22.968 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:22.968 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:38:22.968 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:38:22.968 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:22.968 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:22.968 12:01:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:26.272 Initializing NVMe Controllers 00:38:26.272 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:26.272 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:26.272 Initialization complete. Launching workers. 00:38:26.272 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67276, failed: 0 00:38:26.272 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67276, failed to submit 0 00:38:26.272 success 0, unsuccessful 67276, failed 0 00:38:26.272 12:01:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:26.272 12:01:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:29.571 Initializing NVMe Controllers 00:38:29.571 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:29.571 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:29.571 Initialization complete. Launching workers. 
00:38:29.571 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 119127, failed: 0 00:38:29.571 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29986, failed to submit 89141 00:38:29.571 success 0, unsuccessful 29986, failed 0 00:38:29.571 12:01:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:29.571 12:01:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:32.873 Initializing NVMe Controllers 00:38:32.873 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:32.873 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:32.873 Initialization complete. Launching workers. 00:38:32.873 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146052, failed: 0 00:38:32.873 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36558, failed to submit 109494 00:38:32.873 success 0, unsuccessful 36558, failed 0 00:38:32.873 12:01:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:38:32.873 12:01:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:38:32.873 12:01:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:38:32.873 12:01:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:32.873 12:01:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:32.873 12:01:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:32.874 12:01:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:32.874 12:01:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:38:32.874 12:01:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:38:32.874 12:01:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:36.175 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:36.175 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:36.175 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:36.175 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:36.175 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:36.175 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:36.175 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:38:36.175 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:36.175 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:38:36.175 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:38:36.175 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:38:36.175 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:38:36.175 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:38:36.175 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:38:36.175 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:38:36.175 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:38:37.558 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:38:38.129 00:38:38.129 real 0m20.315s 00:38:38.129 user 0m9.869s 00:38:38.129 sys 0m6.060s 00:38:38.129 12:02:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:38.129 12:02:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:38.129 ************************************ 00:38:38.129 END TEST kernel_target_abort 00:38:38.129 ************************************ 00:38:38.129 12:02:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:38:38.129 12:02:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:38:38.129 12:02:03 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:38.129 12:02:03 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:38:38.129 12:02:03 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:38.129 12:02:03 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:38:38.129 12:02:03 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:38.129 12:02:03 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:38.129 rmmod nvme_tcp 00:38:38.129 rmmod nvme_fabrics 00:38:38.129 rmmod nvme_keyring 00:38:38.129 12:02:03 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:38.129 12:02:03 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:38:38.129 12:02:03 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:38:38.129 12:02:03 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 1397212 ']' 00:38:38.129 12:02:03 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 1397212 00:38:38.129 12:02:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 1397212 ']' 00:38:38.129 12:02:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 1397212 00:38:38.129 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1397212) - No such process 00:38:38.129 12:02:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 1397212 is not found' 00:38:38.129 Process with pid 1397212 is not found 00:38:38.129 12:02:03 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:38:38.129 12:02:03 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:41.426 Waiting for block devices as requested 00:38:41.427 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:41.687 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:41.687 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:41.687 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:41.687 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:41.948 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:41.948 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:41.948 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:42.209 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:38:42.209 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:38:42.470 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:38:42.470 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:38:42.470 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:38:42.732 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:38:42.732 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:38:42.732 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:38:42.992 
0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:38:43.253 12:02:08 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:43.253 12:02:08 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:43.253 12:02:08 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:38:43.253 12:02:08 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:38:43.253 12:02:08 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:43.253 12:02:08 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:38:43.253 12:02:08 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:43.253 12:02:08 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:43.253 12:02:08 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:43.253 12:02:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:38:43.253 12:02:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:45.167 12:02:10 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:45.167 00:38:45.167 real 0m52.565s 00:38:45.167 user 1m5.150s 00:38:45.167 sys 0m19.314s 00:38:45.167 12:02:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:45.167 12:02:10 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:45.167 ************************************ 00:38:45.167 END TEST nvmf_abort_qd_sizes 00:38:45.167 ************************************ 00:38:45.429 12:02:10 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:45.429 12:02:10 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:38:45.429 12:02:10 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:45.429 12:02:10 -- common/autotest_common.sh@10 -- # set +x 00:38:45.429 ************************************ 00:38:45.429 START TEST keyring_file 00:38:45.429 ************************************ 00:38:45.429 12:02:10 keyring_file -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:45.429 * Looking for test storage... 
00:38:45.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:45.429 12:02:10 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:45.429 12:02:10 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:38:45.429 12:02:10 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:45.691 12:02:10 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:45.691 12:02:10 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:45.691 12:02:10 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:45.691 12:02:10 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:45.691 12:02:10 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:38:45.691 12:02:10 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:38:45.691 12:02:10 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:38:45.691 12:02:10 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:38:45.691 12:02:10 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:38:45.691 12:02:10 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:38:45.691 12:02:10 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:38:45.691 12:02:10 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:45.691 12:02:10 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:38:45.691 12:02:10 keyring_file -- scripts/common.sh@345 -- # : 1 00:38:45.691 12:02:10 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:45.691 12:02:10 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:45.691 12:02:10 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:38:45.691 12:02:10 keyring_file -- scripts/common.sh@353 -- # local d=1 00:38:45.691 12:02:10 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:45.691 12:02:10 keyring_file -- scripts/common.sh@355 -- # echo 1 00:38:45.691 12:02:10 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:38:45.691 12:02:10 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:38:45.691 12:02:10 keyring_file -- scripts/common.sh@353 -- # local d=2 00:38:45.691 12:02:10 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:45.691 12:02:10 keyring_file -- scripts/common.sh@355 -- # echo 2 00:38:45.691 12:02:10 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:38:45.691 12:02:10 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:45.691 12:02:10 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:45.691 12:02:10 keyring_file -- scripts/common.sh@368 -- # return 0 00:38:45.691 12:02:10 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:45.691 12:02:10 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:45.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:45.691 --rc genhtml_branch_coverage=1 00:38:45.691 --rc genhtml_function_coverage=1 00:38:45.691 --rc genhtml_legend=1 00:38:45.691 --rc geninfo_all_blocks=1 00:38:45.691 --rc geninfo_unexecuted_blocks=1 00:38:45.691 00:38:45.691 ' 00:38:45.691 12:02:10 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:45.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:45.691 --rc genhtml_branch_coverage=1 00:38:45.691 --rc genhtml_function_coverage=1 00:38:45.691 --rc genhtml_legend=1 00:38:45.691 --rc geninfo_all_blocks=1 
00:38:45.691 --rc geninfo_unexecuted_blocks=1 00:38:45.691 00:38:45.691 ' 00:38:45.691 12:02:10 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:45.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:45.691 --rc genhtml_branch_coverage=1 00:38:45.691 --rc genhtml_function_coverage=1 00:38:45.691 --rc genhtml_legend=1 00:38:45.691 --rc geninfo_all_blocks=1 00:38:45.691 --rc geninfo_unexecuted_blocks=1 00:38:45.691 00:38:45.691 ' 00:38:45.691 12:02:10 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:45.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:45.691 --rc genhtml_branch_coverage=1 00:38:45.691 --rc genhtml_function_coverage=1 00:38:45.691 --rc genhtml_legend=1 00:38:45.691 --rc geninfo_all_blocks=1 00:38:45.691 --rc geninfo_unexecuted_blocks=1 00:38:45.691 00:38:45.691 ' 00:38:45.691 12:02:10 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:45.691 12:02:10 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:45.691 12:02:10 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:38:45.691 12:02:10 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:45.691 12:02:10 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:45.691 12:02:10 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:45.691 12:02:10 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:45.691 12:02:10 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:45.691 12:02:10 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:45.691 12:02:10 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:45.691 12:02:10 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:45.691 12:02:10 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:45.691 12:02:10 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:45.691 12:02:10 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:45.691 12:02:10 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:45.691 12:02:10 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:45.691 12:02:10 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:45.691 12:02:10 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:45.691 12:02:10 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:45.691 12:02:10 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:45.692 12:02:10 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:38:45.692 12:02:10 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:45.692 12:02:10 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:45.692 12:02:10 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:45.692 12:02:10 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.692 12:02:10 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.692 12:02:10 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.692 12:02:10 keyring_file -- paths/export.sh@5 -- # export PATH 00:38:45.692 12:02:10 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:45.692 12:02:10 keyring_file -- nvmf/common.sh@51 -- # : 0 00:38:45.692 12:02:10 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:45.692 12:02:10 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:45.692 12:02:10 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:45.692 12:02:10 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:45.692 12:02:10 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:45.692 12:02:10 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:45.692 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:45.692 12:02:10 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:45.692 12:02:10 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:45.692 12:02:10 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:45.692 12:02:10 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:45.692 12:02:10 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:45.692 12:02:10 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:45.692 12:02:10 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:38:45.692 12:02:10 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:38:45.692 12:02:10 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:38:45.692 12:02:10 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:45.692 12:02:10 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
00:38:45.692 12:02:10 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:45.692 12:02:10 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:45.692 12:02:10 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:45.692 12:02:10 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:45.692 12:02:10 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.i5P291vbwS 00:38:45.692 12:02:10 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:45.692 12:02:10 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:45.692 12:02:10 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:45.692 12:02:10 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:45.692 12:02:10 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:45.692 12:02:10 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:45.692 12:02:10 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:45.692 12:02:11 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.i5P291vbwS 00:38:45.692 12:02:11 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.i5P291vbwS 00:38:45.692 12:02:11 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.i5P291vbwS 00:38:45.692 12:02:11 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:38:45.692 12:02:11 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:45.692 12:02:11 keyring_file -- keyring/common.sh@17 -- # name=key1 00:38:45.692 12:02:11 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:45.692 12:02:11 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:45.692 12:02:11 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:45.692 12:02:11 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.tcDWJZCPi7 00:38:45.692 12:02:11 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:45.692 12:02:11 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:45.692 12:02:11 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:45.692 12:02:11 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:45.692 12:02:11 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:45.692 12:02:11 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:45.692 12:02:11 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:45.692 12:02:11 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.tcDWJZCPi7 00:38:45.692 12:02:11 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.tcDWJZCPi7 00:38:45.692 12:02:11 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.tcDWJZCPi7 00:38:45.692 12:02:11 keyring_file -- keyring/file.sh@30 -- # tgtpid=1407699 00:38:45.692 12:02:11 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1407699 00:38:45.692 12:02:11 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:45.692 12:02:11 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 1407699 ']' 00:38:45.692 12:02:11 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:45.692 12:02:11 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:45.692 12:02:11 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:45.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:45.692 12:02:11 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:45.692 12:02:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:45.692 [2024-11-15 12:02:11.161864] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:38:45.692 [2024-11-15 12:02:11.161929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1407699 ] 00:38:45.953 [2024-11-15 12:02:11.250599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:45.953 [2024-11-15 12:02:11.287801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:46.523 12:02:11 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:46.523 12:02:11 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:38:46.523 12:02:11 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:38:46.523 12:02:11 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:46.523 12:02:11 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:46.523 [2024-11-15 12:02:11.959995] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:46.523 null0 00:38:46.523 [2024-11-15 12:02:11.992038] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:46.523 [2024-11-15 12:02:11.992272] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:46.523 12:02:12 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:46.523 12:02:12 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:46.523 12:02:12 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:46.523 12:02:12 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:46.523 12:02:12 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:38:46.523 12:02:12 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:46.523 12:02:12 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:38:46.523 12:02:12 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:46.523 12:02:12 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:46.523 12:02:12 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:46.523 12:02:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:46.784 [2024-11-15 12:02:12.024110] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:38:46.784 request: 00:38:46.784 { 00:38:46.784 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:38:46.784 "secure_channel": false, 00:38:46.784 "listen_address": { 00:38:46.784 "trtype": "tcp", 00:38:46.784 "traddr": "127.0.0.1", 00:38:46.784 "trsvcid": "4420" 00:38:46.784 }, 00:38:46.784 "method": "nvmf_subsystem_add_listener", 00:38:46.784 "req_id": 1 00:38:46.784 } 00:38:46.784 Got JSON-RPC error response 00:38:46.784 response: 00:38:46.784 { 00:38:46.784 
"code": -32602, 00:38:46.784 "message": "Invalid parameters" 00:38:46.784 } 00:38:46.784 12:02:12 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:38:46.784 12:02:12 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:46.784 12:02:12 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:46.784 12:02:12 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:46.784 12:02:12 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:46.784 12:02:12 keyring_file -- keyring/file.sh@47 -- # bperfpid=1407763 00:38:46.784 12:02:12 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1407763 /var/tmp/bperf.sock 00:38:46.784 12:02:12 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:38:46.784 12:02:12 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 1407763 ']' 00:38:46.784 12:02:12 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:46.784 12:02:12 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:46.784 12:02:12 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:46.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:46.784 12:02:12 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:46.784 12:02:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:46.784 [2024-11-15 12:02:12.083741] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:38:46.784 [2024-11-15 12:02:12.083787] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1407763 ] 00:38:46.784 [2024-11-15 12:02:12.169674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:46.784 [2024-11-15 12:02:12.206412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:47.728 12:02:12 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:47.728 12:02:12 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:38:47.728 12:02:12 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.i5P291vbwS 00:38:47.728 12:02:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.i5P291vbwS 00:38:47.728 12:02:13 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.tcDWJZCPi7 00:38:47.728 12:02:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.tcDWJZCPi7 00:38:47.989 12:02:13 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:38:47.989 12:02:13 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:38:47.989 12:02:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:47.989 12:02:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:47.989 12:02:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:38:47.989 12:02:13 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.i5P291vbwS == \/\t\m\p\/\t\m\p\.\i\5\P\2\9\1\v\b\w\S ]] 00:38:47.989 12:02:13 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:38:47.989 12:02:13 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:38:47.989 12:02:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:47.989 12:02:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:47.989 12:02:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:48.250 12:02:13 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.tcDWJZCPi7 == \/\t\m\p\/\t\m\p\.\t\c\D\W\J\Z\C\P\i\7 ]] 00:38:48.250 12:02:13 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:38:48.250 12:02:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:48.250 12:02:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:48.250 12:02:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:48.250 12:02:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:48.250 12:02:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:48.510 12:02:13 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:38:48.510 12:02:13 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:38:48.510 12:02:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:48.510 12:02:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:48.510 12:02:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:48.510 12:02:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:48.510 12:02:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:48.771 12:02:14 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:38:48.771 12:02:14 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:48.771 12:02:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:48.771 [2024-11-15 12:02:14.150911] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:48.771 nvme0n1 00:38:48.771 12:02:14 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:38:48.771 12:02:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:48.771 12:02:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:48.771 12:02:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:48.771 12:02:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:48.771 12:02:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:49.031 12:02:14 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:38:49.031 12:02:14 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:38:49.031 12:02:14 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:38:49.031 12:02:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:49.031 12:02:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:49.031 12:02:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:49.031 12:02:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:49.292 12:02:14 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:38:49.292 12:02:14 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:49.292 Running I/O for 1 seconds... 00:38:50.232 16788.00 IOPS, 65.58 MiB/s 00:38:50.232 Latency(us) 00:38:50.232 [2024-11-15T11:02:15.730Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:50.232 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:38:50.232 nvme0n1 : 1.00 16852.14 65.83 0.00 0.00 7581.35 2389.33 19442.35 00:38:50.232 [2024-11-15T11:02:15.730Z] =================================================================================================================== 00:38:50.232 [2024-11-15T11:02:15.730Z] Total : 16852.14 65.83 0.00 0.00 7581.35 2389.33 19442.35 00:38:50.232 { 00:38:50.232 "results": [ 00:38:50.232 { 00:38:50.232 "job": "nvme0n1", 00:38:50.232 "core_mask": "0x2", 00:38:50.232 "workload": "randrw", 00:38:50.232 "percentage": 50, 00:38:50.232 "status": "finished", 00:38:50.232 "queue_depth": 128, 00:38:50.232 "io_size": 4096, 00:38:50.233 "runtime": 1.003908, 00:38:50.233 "iops": 16852.14182972942, 00:38:50.233 "mibps": 65.82867902238054, 00:38:50.233 "io_failed": 0, 00:38:50.233 "io_timeout": 0, 00:38:50.233 "avg_latency_us": 7581.351075383221, 00:38:50.233 "min_latency_us": 2389.3333333333335, 00:38:50.233 "max_latency_us": 19442.346666666668 00:38:50.233 } 00:38:50.233 ], 00:38:50.233 "core_count": 1 00:38:50.233 } 00:38:50.233 12:02:15 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:50.233 12:02:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:50.493 12:02:15 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:38:50.493 12:02:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:50.493 12:02:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:50.493 12:02:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:50.493 12:02:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:50.493 12:02:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:50.753 12:02:16 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:38:50.753 12:02:16 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:38:50.753 12:02:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:50.753 12:02:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:50.753 12:02:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:50.754 12:02:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:50.754 12:02:16 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:51.014 12:02:16 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:38:51.014 12:02:16 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:51.014 12:02:16 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:51.014 12:02:16 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:51.014 12:02:16 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:51.014 12:02:16 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:51.014 12:02:16 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:51.014 12:02:16 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:51.014 12:02:16 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:51.014 12:02:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:51.014 [2024-11-15 12:02:16.415406] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:51.014 [2024-11-15 12:02:16.415832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x85c740 (107): Transport endpoint is not connected 00:38:51.014 [2024-11-15 12:02:16.416827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x85c740 (9): Bad file descriptor 00:38:51.014 [2024-11-15 12:02:16.417829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:51.014 [2024-11-15 12:02:16.417836] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:51.014 [2024-11-15 12:02:16.417842] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:51.014 [2024-11-15 12:02:16.417851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:38:51.014 request: 00:38:51.014 { 00:38:51.014 "name": "nvme0", 00:38:51.014 "trtype": "tcp", 00:38:51.014 "traddr": "127.0.0.1", 00:38:51.014 "adrfam": "ipv4", 00:38:51.014 "trsvcid": "4420", 00:38:51.014 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:51.014 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:51.014 "prchk_reftag": false, 00:38:51.014 "prchk_guard": false, 00:38:51.014 "hdgst": false, 00:38:51.014 "ddgst": false, 00:38:51.014 "psk": "key1", 00:38:51.014 "allow_unrecognized_csi": false, 00:38:51.014 "method": "bdev_nvme_attach_controller", 00:38:51.014 "req_id": 1 00:38:51.014 } 00:38:51.014 Got JSON-RPC error response 00:38:51.014 response: 00:38:51.014 { 00:38:51.014 "code": -5, 00:38:51.014 "message": "Input/output error" 00:38:51.014 } 00:38:51.014 12:02:16 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:51.014 12:02:16 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:51.014 12:02:16 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:51.014 12:02:16 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:51.014 12:02:16 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:38:51.014 12:02:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:51.014 12:02:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:51.014 12:02:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:51.014 12:02:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:51.014 12:02:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:51.274 12:02:16 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:38:51.274 12:02:16 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:38:51.274 12:02:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:51.274 12:02:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:51.274 12:02:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:51.274 12:02:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:51.274 12:02:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:51.535 12:02:16 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:38:51.535 12:02:16 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:38:51.535 12:02:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:51.535 12:02:16 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:38:51.535 12:02:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:38:51.795 12:02:17 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:38:51.795 12:02:17 keyring_file -- keyring/file.sh@78 -- # jq length 00:38:51.795 12:02:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:52.056 12:02:17 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:38:52.056 12:02:17 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.i5P291vbwS 00:38:52.056 12:02:17 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.i5P291vbwS 00:38:52.056 12:02:17 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:52.056 12:02:17 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.i5P291vbwS 00:38:52.056 12:02:17 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:52.056 12:02:17 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:52.056 12:02:17 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:52.056 12:02:17 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:52.056 12:02:17 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.i5P291vbwS 00:38:52.056 12:02:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.i5P291vbwS 00:38:52.056 [2024-11-15 12:02:17.462577] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.i5P291vbwS': 0100660 00:38:52.056 [2024-11-15 12:02:17.462600] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:38:52.056 request: 00:38:52.056 { 00:38:52.056 "name": "key0", 00:38:52.056 "path": "/tmp/tmp.i5P291vbwS", 00:38:52.056 "method": "keyring_file_add_key", 00:38:52.056 "req_id": 1 00:38:52.056 } 00:38:52.056 Got JSON-RPC error response 00:38:52.056 response: 00:38:52.056 { 00:38:52.056 "code": -1, 00:38:52.056 "message": "Operation not permitted" 00:38:52.056 } 00:38:52.056 12:02:17 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:52.056 12:02:17 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:52.056 12:02:17 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:52.056 12:02:17 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:52.056 12:02:17 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.i5P291vbwS 00:38:52.056 12:02:17 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.i5P291vbwS 00:38:52.056 12:02:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.i5P291vbwS 00:38:52.317 12:02:17 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.i5P291vbwS 00:38:52.317 12:02:17 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:38:52.317 12:02:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:52.317 12:02:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:52.317 12:02:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:52.317 12:02:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:52.317 12:02:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:52.584 12:02:17 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:38:52.584 12:02:17 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:52.584 12:02:17 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:38:52.584 12:02:17 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:52.584 12:02:17 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:38:52.584 12:02:17 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:52.584 12:02:17 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:38:52.584 12:02:17 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:38:52.584 12:02:17 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:52.584 12:02:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:52.584 [2024-11-15 12:02:18.007962] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.i5P291vbwS': No such file or directory 00:38:52.584 [2024-11-15 12:02:18.007976] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:38:52.584 [2024-11-15 12:02:18.007989] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:38:52.584 [2024-11-15 12:02:18.007994] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:38:52.584 [2024-11-15 12:02:18.007999] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:52.584 [2024-11-15 12:02:18.008004] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:38:52.584 request: 00:38:52.584 { 00:38:52.584 "name": "nvme0", 00:38:52.584 "trtype": "tcp", 00:38:52.584 "traddr": "127.0.0.1", 00:38:52.584 "adrfam": "ipv4", 00:38:52.584 "trsvcid": "4420", 00:38:52.584 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:52.584 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:52.584 "prchk_reftag": false, 00:38:52.584 "prchk_guard": false, 00:38:52.584 "hdgst": false, 00:38:52.584 "ddgst": false, 00:38:52.584 "psk": "key0", 00:38:52.584 "allow_unrecognized_csi": false, 00:38:52.584 "method": "bdev_nvme_attach_controller", 00:38:52.584 "req_id": 1 00:38:52.584 } 00:38:52.584 Got JSON-RPC error response 00:38:52.584 response: 00:38:52.584 { 00:38:52.584 "code": -19, 00:38:52.584 "message": "No such device" 00:38:52.584 } 00:38:52.584 12:02:18 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:38:52.584 12:02:18 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:38:52.584 12:02:18 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:38:52.584 12:02:18 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:38:52.584 12:02:18 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:38:52.584 12:02:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:52.898 12:02:18 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:52.898 12:02:18 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:38:52.898 12:02:18 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:52.898 12:02:18 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:52.898 12:02:18 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:52.898 12:02:18 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:52.898 12:02:18 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.fk3vuY71AC 00:38:52.898 12:02:18 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:52.898 12:02:18 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:52.898 12:02:18 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:38:52.898 12:02:18 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:52.898 12:02:18 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:52.898 12:02:18 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:38:52.898 12:02:18 keyring_file -- nvmf/common.sh@733 -- # python - 00:38:52.898 12:02:18 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.fk3vuY71AC 00:38:52.898 12:02:18 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.fk3vuY71AC 00:38:52.898 12:02:18 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.fk3vuY71AC 00:38:52.898 12:02:18 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fk3vuY71AC 00:38:52.898 12:02:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fk3vuY71AC 00:38:53.194 12:02:18 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:53.194 12:02:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:53.194 nvme0n1 00:38:53.194 12:02:18 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:38:53.194 12:02:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:53.194 12:02:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:53.194 12:02:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:53.194 12:02:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:53.194 12:02:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:53.472 12:02:18 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:38:53.472 12:02:18 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:38:53.472 12:02:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:53.733 12:02:18 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:38:53.733 12:02:18 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:38:53.733 12:02:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:53.733 12:02:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:53.733 12:02:18 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:53.733 12:02:19 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:38:53.734 12:02:19 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:38:53.734 12:02:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:53.734 12:02:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:53.734 12:02:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:53.734 12:02:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:53.734 12:02:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:53.994 12:02:19 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:38:53.994 12:02:19 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:53.994 12:02:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:54.253 12:02:19 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:38:54.253 12:02:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:54.253 12:02:19 keyring_file -- keyring/file.sh@105 -- # jq length 00:38:54.253 12:02:19 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:38:54.253 12:02:19 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.fk3vuY71AC 00:38:54.253 12:02:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.fk3vuY71AC 00:38:54.514 12:02:19 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.tcDWJZCPi7 00:38:54.514 12:02:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.tcDWJZCPi7 00:38:54.775 12:02:20 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:54.775 12:02:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:54.775 nvme0n1 00:38:55.037 12:02:20 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:38:55.037 12:02:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:38:55.037 12:02:20 keyring_file -- keyring/file.sh@113 -- # config='{ 00:38:55.037 "subsystems": [ 00:38:55.037 { 00:38:55.037 "subsystem": "keyring", 00:38:55.037 "config": [ 00:38:55.037 { 00:38:55.037 "method": "keyring_file_add_key", 00:38:55.037 "params": { 00:38:55.037 "name": "key0", 00:38:55.037 "path": "/tmp/tmp.fk3vuY71AC" 00:38:55.037 } 00:38:55.037 }, 00:38:55.037 { 00:38:55.037 "method": "keyring_file_add_key", 00:38:55.037 "params": { 00:38:55.037 "name": "key1", 00:38:55.037 "path": "/tmp/tmp.tcDWJZCPi7" 00:38:55.037 } 00:38:55.037 } 00:38:55.037 ] 00:38:55.037 
}, 00:38:55.037 { 00:38:55.037 "subsystem": "iobuf", 00:38:55.037 "config": [ 00:38:55.037 { 00:38:55.037 "method": "iobuf_set_options", 00:38:55.037 "params": { 00:38:55.037 "small_pool_count": 8192, 00:38:55.037 "large_pool_count": 1024, 00:38:55.037 "small_bufsize": 8192, 00:38:55.037 "large_bufsize": 135168, 00:38:55.037 "enable_numa": false 00:38:55.037 } 00:38:55.037 } 00:38:55.037 ] 00:38:55.037 }, 00:38:55.037 { 00:38:55.037 "subsystem": "sock", 00:38:55.037 "config": [ 00:38:55.037 { 00:38:55.037 "method": "sock_set_default_impl", 00:38:55.037 "params": { 00:38:55.037 "impl_name": "posix" 00:38:55.037 } 00:38:55.037 }, 00:38:55.037 { 00:38:55.037 "method": "sock_impl_set_options", 00:38:55.037 "params": { 00:38:55.037 "impl_name": "ssl", 00:38:55.037 "recv_buf_size": 4096, 00:38:55.037 "send_buf_size": 4096, 00:38:55.037 "enable_recv_pipe": true, 00:38:55.037 "enable_quickack": false, 00:38:55.037 "enable_placement_id": 0, 00:38:55.037 "enable_zerocopy_send_server": true, 00:38:55.037 "enable_zerocopy_send_client": false, 00:38:55.037 "zerocopy_threshold": 0, 00:38:55.037 "tls_version": 0, 00:38:55.037 "enable_ktls": false 00:38:55.037 } 00:38:55.037 }, 00:38:55.037 { 00:38:55.037 "method": "sock_impl_set_options", 00:38:55.037 "params": { 00:38:55.037 "impl_name": "posix", 00:38:55.037 "recv_buf_size": 2097152, 00:38:55.037 "send_buf_size": 2097152, 00:38:55.037 "enable_recv_pipe": true, 00:38:55.037 "enable_quickack": false, 00:38:55.037 "enable_placement_id": 0, 00:38:55.037 "enable_zerocopy_send_server": true, 00:38:55.037 "enable_zerocopy_send_client": false, 00:38:55.037 "zerocopy_threshold": 0, 00:38:55.037 "tls_version": 0, 00:38:55.037 "enable_ktls": false 00:38:55.037 } 00:38:55.037 } 00:38:55.037 ] 00:38:55.037 }, 00:38:55.037 { 00:38:55.037 "subsystem": "vmd", 00:38:55.037 "config": [] 00:38:55.037 }, 00:38:55.037 { 00:38:55.037 "subsystem": "accel", 00:38:55.037 "config": [ 00:38:55.037 { 00:38:55.037 "method": "accel_set_options", 00:38:55.037 "params": { 00:38:55.037 "small_cache_size": 128, 00:38:55.037 "large_cache_size": 16, 00:38:55.037 "task_count": 2048, 00:38:55.037 "sequence_count": 2048, 00:38:55.037 "buf_count": 2048 00:38:55.037 } 00:38:55.037 } 00:38:55.037 ] 00:38:55.037 }, 00:38:55.037 { 00:38:55.037 "subsystem": "bdev", 00:38:55.037 "config": [ 00:38:55.037 { 00:38:55.037 "method": "bdev_set_options", 00:38:55.037 "params": { 00:38:55.037 "bdev_io_pool_size": 65535, 00:38:55.037 "bdev_io_cache_size": 256, 00:38:55.037 "bdev_auto_examine": true, 00:38:55.037 "iobuf_small_cache_size": 128, 00:38:55.037 "iobuf_large_cache_size": 16 00:38:55.037 } 00:38:55.037 }, 00:38:55.037 { 00:38:55.037 "method": "bdev_raid_set_options", 00:38:55.037 "params": { 00:38:55.037 "process_window_size_kb": 1024, 00:38:55.037 "process_max_bandwidth_mb_sec": 0 00:38:55.037 } 00:38:55.037 }, 00:38:55.037 { 00:38:55.037 "method": "bdev_iscsi_set_options", 00:38:55.037 "params": { 00:38:55.037 "timeout_sec": 30 00:38:55.037 } 00:38:55.037 }, 00:38:55.037 { 00:38:55.037 "method": "bdev_nvme_set_options", 00:38:55.037 "params": { 00:38:55.037 "action_on_timeout": "none", 00:38:55.037 "timeout_us": 0, 00:38:55.037 "timeout_admin_us": 0, 00:38:55.037 "keep_alive_timeout_ms": 10000, 00:38:55.037 "arbitration_burst": 0, 00:38:55.037 "low_priority_weight": 0, 00:38:55.037 "medium_priority_weight": 0, 00:38:55.037 "high_priority_weight": 0, 00:38:55.037 "nvme_adminq_poll_period_us": 10000, 00:38:55.037 "nvme_ioq_poll_period_us": 0, 00:38:55.037 "io_queue_requests": 512, 00:38:55.037 
"delay_cmd_submit": true, 00:38:55.037 "transport_retry_count": 4, 00:38:55.037 "bdev_retry_count": 3, 00:38:55.037 "transport_ack_timeout": 0, 00:38:55.037 "ctrlr_loss_timeout_sec": 0, 00:38:55.037 "reconnect_delay_sec": 0, 00:38:55.037 "fast_io_fail_timeout_sec": 0, 00:38:55.037 "disable_auto_failback": false, 00:38:55.037 "generate_uuids": false, 00:38:55.037 "transport_tos": 0, 00:38:55.037 "nvme_error_stat": false, 00:38:55.037 "rdma_srq_size": 0, 00:38:55.037 "io_path_stat": false, 00:38:55.037 "allow_accel_sequence": false, 00:38:55.037 "rdma_max_cq_size": 0, 00:38:55.037 "rdma_cm_event_timeout_ms": 0, 00:38:55.037 "dhchap_digests": [ 00:38:55.037 "sha256", 00:38:55.037 "sha384", 00:38:55.037 "sha512" 00:38:55.037 ], 00:38:55.037 "dhchap_dhgroups": [ 00:38:55.037 "null", 00:38:55.037 "ffdhe2048", 00:38:55.037 "ffdhe3072", 00:38:55.037 "ffdhe4096", 00:38:55.037 "ffdhe6144", 00:38:55.037 "ffdhe8192" 00:38:55.037 ] 00:38:55.037 } 00:38:55.037 }, 00:38:55.037 { 00:38:55.037 "method": "bdev_nvme_attach_controller", 00:38:55.037 "params": { 00:38:55.037 "name": "nvme0", 00:38:55.037 "trtype": "TCP", 00:38:55.037 "adrfam": "IPv4", 00:38:55.037 "traddr": "127.0.0.1", 00:38:55.037 "trsvcid": "4420", 00:38:55.037 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:55.037 "prchk_reftag": false, 00:38:55.037 "prchk_guard": false, 00:38:55.037 "ctrlr_loss_timeout_sec": 0, 00:38:55.037 "reconnect_delay_sec": 0, 00:38:55.037 "fast_io_fail_timeout_sec": 0, 00:38:55.037 "psk": "key0", 00:38:55.037 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:55.037 "hdgst": false, 00:38:55.037 "ddgst": false, 00:38:55.037 "multipath": "multipath" 00:38:55.037 } 00:38:55.037 }, 00:38:55.037 { 00:38:55.037 "method": "bdev_nvme_set_hotplug", 00:38:55.037 "params": { 00:38:55.037 "period_us": 100000, 00:38:55.037 "enable": false 00:38:55.037 } 00:38:55.037 }, 00:38:55.037 { 00:38:55.037 "method": "bdev_wait_for_examine" 00:38:55.038 } 00:38:55.038 ] 00:38:55.038 }, 00:38:55.038 { 00:38:55.038 "subsystem": "nbd", 00:38:55.038 "config": [] 00:38:55.038 } 00:38:55.038 ] 00:38:55.038 }' 00:38:55.038 12:02:20 keyring_file -- keyring/file.sh@115 -- # killprocess 1407763 00:38:55.038 12:02:20 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 1407763 ']' 00:38:55.038 12:02:20 keyring_file -- common/autotest_common.sh@956 -- # kill -0 1407763 00:38:55.038 12:02:20 keyring_file -- common/autotest_common.sh@957 -- # uname 00:38:55.038 12:02:20 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:55.038 12:02:20 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1407763 00:38:55.299 12:02:20 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:55.299 12:02:20 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:38:55.299 12:02:20 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1407763' 00:38:55.299 killing process with pid 1407763 00:38:55.299 12:02:20 keyring_file -- common/autotest_common.sh@971 -- # kill 1407763 00:38:55.299 Received shutdown signal, test time was about 1.000000 seconds 00:38:55.299 00:38:55.299 Latency(us) 00:38:55.299 [2024-11-15T11:02:20.797Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:55.299 [2024-11-15T11:02:20.797Z] =================================================================================================================== 00:38:55.299 [2024-11-15T11:02:20.797Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:55.299 12:02:20 
keyring_file -- common/autotest_common.sh@976 -- # wait 1407763 00:38:55.299 12:02:20 keyring_file -- keyring/file.sh@118 -- # bperfpid=1409576 00:38:55.299 12:02:20 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1409576 /var/tmp/bperf.sock 00:38:55.299 12:02:20 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 1409576 ']' 00:38:55.299 12:02:20 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:55.299 12:02:20 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:55.299 12:02:20 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:38:55.299 12:02:20 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:55.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:55.299 12:02:20 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:55.299 12:02:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:55.299 12:02:20 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:38:55.299 "subsystems": [ 00:38:55.299 { 00:38:55.299 "subsystem": "keyring", 00:38:55.299 "config": [ 00:38:55.299 { 00:38:55.299 "method": "keyring_file_add_key", 00:38:55.299 "params": { 00:38:55.299 "name": "key0", 00:38:55.299 "path": "/tmp/tmp.fk3vuY71AC" 00:38:55.299 } 00:38:55.299 }, 00:38:55.299 { 00:38:55.299 "method": "keyring_file_add_key", 00:38:55.299 "params": { 00:38:55.299 "name": "key1", 00:38:55.299 "path": "/tmp/tmp.tcDWJZCPi7" 00:38:55.299 } 00:38:55.299 } 00:38:55.299 ] 00:38:55.299 }, 00:38:55.299 { 00:38:55.299 "subsystem": "iobuf", 00:38:55.299 "config": [ 00:38:55.299 { 00:38:55.299 "method": "iobuf_set_options", 00:38:55.299 "params": { 00:38:55.299 "small_pool_count": 8192, 00:38:55.299 "large_pool_count": 1024, 00:38:55.299 "small_bufsize": 8192, 00:38:55.299 "large_bufsize": 135168, 00:38:55.299 "enable_numa": false 00:38:55.299 } 00:38:55.299 } 00:38:55.299 ] 00:38:55.299 }, 00:38:55.299 { 00:38:55.299 "subsystem": "sock", 00:38:55.299 "config": [ 00:38:55.299 { 00:38:55.299 "method": "sock_set_default_impl", 00:38:55.299 "params": { 00:38:55.299 "impl_name": "posix" 00:38:55.299 } 00:38:55.299 }, 00:38:55.299 { 00:38:55.299 "method": "sock_impl_set_options", 00:38:55.299 "params": { 00:38:55.299 "impl_name": "ssl", 00:38:55.299 "recv_buf_size": 4096, 00:38:55.299 "send_buf_size": 4096, 00:38:55.299 "enable_recv_pipe": true, 00:38:55.299 "enable_quickack": false, 00:38:55.299 "enable_placement_id": 0, 00:38:55.299 "enable_zerocopy_send_server": true, 00:38:55.299 "enable_zerocopy_send_client": false, 00:38:55.299 "zerocopy_threshold": 0, 00:38:55.299 "tls_version": 0, 00:38:55.299 "enable_ktls": false 00:38:55.299 } 00:38:55.299 }, 00:38:55.299 { 00:38:55.299 "method": "sock_impl_set_options", 00:38:55.299 "params": { 00:38:55.299 "impl_name": "posix", 00:38:55.299 "recv_buf_size": 2097152, 00:38:55.299 "send_buf_size": 2097152, 00:38:55.299 "enable_recv_pipe": true, 00:38:55.299 "enable_quickack": false, 00:38:55.299 "enable_placement_id": 0, 00:38:55.299 "enable_zerocopy_send_server": true, 00:38:55.299 "enable_zerocopy_send_client": false, 00:38:55.299 "zerocopy_threshold": 0, 00:38:55.299 "tls_version": 0, 00:38:55.299 "enable_ktls": false 00:38:55.299 } 00:38:55.299 } 00:38:55.299 ] 00:38:55.299 }, 
00:38:55.299 { 00:38:55.299 "subsystem": "vmd", 00:38:55.299 "config": [] 00:38:55.299 }, 00:38:55.299 { 00:38:55.299 "subsystem": "accel", 00:38:55.299 "config": [ 00:38:55.299 { 00:38:55.299 "method": "accel_set_options", 00:38:55.299 "params": { 00:38:55.299 "small_cache_size": 128, 00:38:55.299 "large_cache_size": 16, 00:38:55.299 "task_count": 2048, 00:38:55.299 "sequence_count": 2048, 00:38:55.299 "buf_count": 2048 00:38:55.299 } 00:38:55.299 } 00:38:55.299 ] 00:38:55.299 }, 00:38:55.299 { 00:38:55.299 "subsystem": "bdev", 00:38:55.299 "config": [ 00:38:55.300 { 00:38:55.300 "method": "bdev_set_options", 00:38:55.300 "params": { 00:38:55.300 "bdev_io_pool_size": 65535, 00:38:55.300 "bdev_io_cache_size": 256, 00:38:55.300 "bdev_auto_examine": true, 00:38:55.300 "iobuf_small_cache_size": 128, 00:38:55.300 "iobuf_large_cache_size": 16 00:38:55.300 } 00:38:55.300 }, 00:38:55.300 { 00:38:55.300 "method": "bdev_raid_set_options", 00:38:55.300 "params": { 00:38:55.300 "process_window_size_kb": 1024, 00:38:55.300 "process_max_bandwidth_mb_sec": 0 00:38:55.300 } 00:38:55.300 }, 00:38:55.300 { 00:38:55.300 "method": "bdev_iscsi_set_options", 00:38:55.300 "params": { 00:38:55.300 "timeout_sec": 30 00:38:55.300 } 00:38:55.300 }, 00:38:55.300 { 00:38:55.300 "method": "bdev_nvme_set_options", 00:38:55.300 "params": { 00:38:55.300 "action_on_timeout": "none", 00:38:55.300 "timeout_us": 0, 00:38:55.300 "timeout_admin_us": 0, 00:38:55.300 "keep_alive_timeout_ms": 10000, 00:38:55.300 "arbitration_burst": 0, 00:38:55.300 "low_priority_weight": 0, 00:38:55.300 "medium_priority_weight": 0, 00:38:55.300 "high_priority_weight": 0, 00:38:55.300 "nvme_adminq_poll_period_us": 10000, 00:38:55.300 "nvme_ioq_poll_period_us": 0, 00:38:55.300 "io_queue_requests": 512, 00:38:55.300 "delay_cmd_submit": true, 00:38:55.300 "transport_retry_count": 4, 00:38:55.300 "bdev_retry_count": 3, 00:38:55.300 "transport_ack_timeout": 0, 00:38:55.300 "ctrlr_loss_timeout_sec": 0, 00:38:55.300 "reconnect_delay_sec": 0, 00:38:55.300 "fast_io_fail_timeout_sec": 0, 00:38:55.300 "disable_auto_failback": false, 00:38:55.300 "generate_uuids": false, 00:38:55.300 "transport_tos": 0, 00:38:55.300 "nvme_error_stat": false, 00:38:55.300 "rdma_srq_size": 0, 00:38:55.300 "io_path_stat": false, 00:38:55.300 "allow_accel_sequence": false, 00:38:55.300 "rdma_max_cq_size": 0, 00:38:55.300 "rdma_cm_event_timeout_ms": 0, 00:38:55.300 "dhchap_digests": [ 00:38:55.300 "sha256", 00:38:55.300 "sha384", 00:38:55.300 "sha512" 00:38:55.300 ], 00:38:55.300 "dhchap_dhgroups": [ 00:38:55.300 "null", 00:38:55.300 "ffdhe2048", 00:38:55.300 "ffdhe3072", 00:38:55.300 "ffdhe4096", 00:38:55.300 "ffdhe6144", 00:38:55.300 "ffdhe8192" 00:38:55.300 ] 00:38:55.300 } 00:38:55.300 }, 00:38:55.300 { 00:38:55.300 "method": "bdev_nvme_attach_controller", 00:38:55.300 "params": { 00:38:55.300 "name": "nvme0", 00:38:55.300 "trtype": "TCP", 00:38:55.300 "adrfam": "IPv4", 00:38:55.300 "traddr": "127.0.0.1", 00:38:55.300 "trsvcid": "4420", 00:38:55.300 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:55.300 "prchk_reftag": false, 00:38:55.300 "prchk_guard": false, 00:38:55.300 "ctrlr_loss_timeout_sec": 0, 00:38:55.300 "reconnect_delay_sec": 0, 00:38:55.300 "fast_io_fail_timeout_sec": 0, 00:38:55.300 "psk": "key0", 00:38:55.300 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:55.300 "hdgst": false, 00:38:55.300 "ddgst": false, 00:38:55.300 "multipath": "multipath" 00:38:55.300 } 00:38:55.300 }, 00:38:55.300 { 00:38:55.300 "method": "bdev_nvme_set_hotplug", 00:38:55.300 "params": { 
00:38:55.300 "period_us": 100000, 00:38:55.300 "enable": false 00:38:55.300 } 00:38:55.300 }, 00:38:55.300 { 00:38:55.300 "method": "bdev_wait_for_examine" 00:38:55.300 } 00:38:55.300 ] 00:38:55.300 }, 00:38:55.300 { 00:38:55.300 "subsystem": "nbd", 00:38:55.300 "config": [] 00:38:55.300 } 00:38:55.300 ] 00:38:55.300 }' 00:38:55.300 [2024-11-15 12:02:20.733770] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 00:38:55.300 [2024-11-15 12:02:20.733830] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1409576 ] 00:38:55.561 [2024-11-15 12:02:20.816551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:55.561 [2024-11-15 12:02:20.845777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:55.561 [2024-11-15 12:02:20.989607] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:56.132 12:02:21 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:56.132 12:02:21 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:38:56.132 12:02:21 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:38:56.132 12:02:21 keyring_file -- keyring/file.sh@121 -- # jq length 00:38:56.132 12:02:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:56.393 12:02:21 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:38:56.393 12:02:21 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:38:56.393 12:02:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:56.393 12:02:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:56.393 12:02:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:56.393 12:02:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:56.393 12:02:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:56.654 12:02:21 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:38:56.654 12:02:21 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:38:56.654 12:02:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:56.654 12:02:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:56.654 12:02:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:56.654 12:02:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:56.654 12:02:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:56.654 12:02:22 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:38:56.654 12:02:22 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:38:56.654 12:02:22 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:38:56.654 12:02:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:38:56.915 12:02:22 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:38:56.915 12:02:22 keyring_file -- keyring/file.sh@1 -- # cleanup 00:38:56.915 12:02:22 keyring_file -- 
keyring/file.sh@19 -- # rm -f /tmp/tmp.fk3vuY71AC /tmp/tmp.tcDWJZCPi7 00:38:56.915 12:02:22 keyring_file -- keyring/file.sh@20 -- # killprocess 1409576 00:38:56.915 12:02:22 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 1409576 ']' 00:38:56.915 12:02:22 keyring_file -- common/autotest_common.sh@956 -- # kill -0 1409576 00:38:56.915 12:02:22 keyring_file -- common/autotest_common.sh@957 -- # uname 00:38:56.915 12:02:22 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:56.915 12:02:22 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1409576 00:38:56.915 12:02:22 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:38:56.915 12:02:22 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:38:56.915 12:02:22 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1409576' 00:38:56.915 killing process with pid 1409576 00:38:56.915 12:02:22 keyring_file -- common/autotest_common.sh@971 -- # kill 1409576 00:38:56.915 Received shutdown signal, test time was about 1.000000 seconds 00:38:56.915 00:38:56.915 Latency(us) 00:38:56.915 [2024-11-15T11:02:22.413Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:56.915 [2024-11-15T11:02:22.413Z] =================================================================================================================== 00:38:56.915 [2024-11-15T11:02:22.413Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:56.915 12:02:22 keyring_file -- common/autotest_common.sh@976 -- # wait 1409576 00:38:57.175 12:02:22 keyring_file -- keyring/file.sh@21 -- # killprocess 1407699 00:38:57.175 12:02:22 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 1407699 ']' 00:38:57.175 12:02:22 keyring_file -- common/autotest_common.sh@956 -- # kill -0 1407699 00:38:57.175 12:02:22 keyring_file -- common/autotest_common.sh@957 -- # uname 00:38:57.175 12:02:22 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:38:57.175 12:02:22 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1407699 00:38:57.175 12:02:22 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:38:57.175 12:02:22 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:38:57.175 12:02:22 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1407699' 00:38:57.175 killing process with pid 1407699 00:38:57.175 12:02:22 keyring_file -- common/autotest_common.sh@971 -- # kill 1407699 00:38:57.175 12:02:22 keyring_file -- common/autotest_common.sh@976 -- # wait 1407699 00:38:57.436 00:38:57.436 real 0m11.940s 00:38:57.436 user 0m28.859s 00:38:57.436 sys 0m2.649s 00:38:57.436 12:02:22 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:38:57.436 12:02:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:57.436 ************************************ 00:38:57.436 END TEST keyring_file 00:38:57.436 ************************************ 00:38:57.436 12:02:22 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:38:57.436 12:02:22 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:57.436 12:02:22 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:38:57.436 12:02:22 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:38:57.436 12:02:22 
-- common/autotest_common.sh@10 -- # set +x 00:38:57.436 ************************************ 00:38:57.436 START TEST keyring_linux 00:38:57.436 ************************************ 00:38:57.436 12:02:22 keyring_linux -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:57.436 Joined session keyring: 816458528 00:38:57.436 * Looking for test storage... 00:38:57.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:57.436 12:02:22 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:38:57.436 12:02:22 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:38:57.436 12:02:22 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:38:57.698 12:02:22 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:38:57.698 12:02:22 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:57.698 12:02:22 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:57.698 12:02:22 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:57.698 12:02:22 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:38:57.698 12:02:22 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:38:57.698 12:02:22 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:38:57.698 12:02:22 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:38:57.698 12:02:22 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:38:57.698 12:02:22 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:38:57.698 12:02:22 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:38:57.698 12:02:22 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:57.698 12:02:22 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:38:57.698 12:02:22 keyring_linux -- scripts/common.sh@345 -- # : 1 00:38:57.698 12:02:22 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:57.698 12:02:22 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:57.698 12:02:22 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:38:57.698 12:02:22 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:38:57.698 12:02:22 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:57.698 12:02:22 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:38:57.698 12:02:22 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:38:57.698 12:02:22 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:38:57.698 12:02:22 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:38:57.698 12:02:22 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:57.698 12:02:22 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:38:57.698 12:02:22 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:38:57.698 12:02:22 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:57.698 12:02:22 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:57.698 12:02:22 keyring_linux -- scripts/common.sh@368 -- # return 0 00:38:57.698 12:02:22 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:57.698 12:02:22 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:38:57.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:57.698 --rc genhtml_branch_coverage=1 00:38:57.698 --rc genhtml_function_coverage=1 00:38:57.698 --rc genhtml_legend=1 00:38:57.698 --rc geninfo_all_blocks=1 00:38:57.698 --rc geninfo_unexecuted_blocks=1 00:38:57.698 00:38:57.698 ' 00:38:57.698 12:02:22 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:38:57.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:57.698 --rc genhtml_branch_coverage=1 00:38:57.698 --rc genhtml_function_coverage=1 00:38:57.698 --rc genhtml_legend=1 00:38:57.698 --rc geninfo_all_blocks=1 00:38:57.698 --rc geninfo_unexecuted_blocks=1 00:38:57.698 00:38:57.698 ' 00:38:57.698 12:02:22 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:38:57.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:57.698 --rc genhtml_branch_coverage=1 00:38:57.698 --rc genhtml_function_coverage=1 00:38:57.698 --rc genhtml_legend=1 00:38:57.698 --rc geninfo_all_blocks=1 00:38:57.698 --rc geninfo_unexecuted_blocks=1 00:38:57.698 00:38:57.698 ' 00:38:57.698 12:02:22 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:38:57.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:57.698 --rc genhtml_branch_coverage=1 00:38:57.698 --rc genhtml_function_coverage=1 00:38:57.698 --rc genhtml_legend=1 00:38:57.698 --rc geninfo_all_blocks=1 00:38:57.698 --rc geninfo_unexecuted_blocks=1 00:38:57.698 00:38:57.698 ' 00:38:57.698 12:02:22 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:57.698 12:02:22 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:57.698 12:02:22 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:38:57.698 12:02:22 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:57.698 12:02:22 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:57.698 12:02:22 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:57.698 12:02:22 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:57.698 12:02:22 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:38:57.698 12:02:22 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:57.698 12:02:22 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:57.698 12:02:22 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:57.698 12:02:22 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:57.698 12:02:22 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:57.698 12:02:22 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:57.698 12:02:22 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:57.698 12:02:22 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:57.698 12:02:22 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:57.698 12:02:22 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:57.698 12:02:22 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:57.698 12:02:22 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:57.698 12:02:22 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:38:57.698 12:02:22 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:57.698 12:02:22 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:57.698 12:02:22 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:57.698 12:02:22 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:57.698 12:02:22 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:57.698 12:02:22 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:57.698 12:02:22 keyring_linux -- paths/export.sh@5 -- # export PATH 00:38:57.698 12:02:22 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:38:57.698 12:02:22 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:38:57.698 12:02:22 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:57.698 12:02:22 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:57.698 12:02:22 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:57.698 12:02:22 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:57.698 12:02:22 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:57.698 12:02:22 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:38:57.698 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:38:57.698 12:02:22 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:57.698 12:02:22 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:57.698 12:02:22 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:57.698 12:02:23 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:57.698 12:02:23 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:57.698 12:02:23 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:57.698 12:02:23 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:38:57.698 12:02:23 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:38:57.698 12:02:23 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:38:57.699 12:02:23 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:38:57.699 12:02:23 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:57.699 12:02:23 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:38:57.699 12:02:23 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:57.699 12:02:23 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:57.699 12:02:23 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:38:57.699 12:02:23 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:57.699 12:02:23 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:57.699 12:02:23 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:57.699 12:02:23 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:57.699 12:02:23 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:38:57.699 12:02:23 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:57.699 12:02:23 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:57.699 12:02:23 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:38:57.699 12:02:23 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:38:57.699 /tmp/:spdk-test:key0 00:38:57.699 12:02:23 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:38:57.699 12:02:23 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:57.699 12:02:23 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:38:57.699 12:02:23 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:57.699 12:02:23 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:57.699 12:02:23 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:38:57.699 
12:02:23 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:57.699 12:02:23 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:57.699 12:02:23 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:38:57.699 12:02:23 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:38:57.699 12:02:23 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:38:57.699 12:02:23 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:38:57.699 12:02:23 keyring_linux -- nvmf/common.sh@733 -- # python - 00:38:57.699 12:02:23 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:38:57.699 12:02:23 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:38:57.699 /tmp/:spdk-test:key1 00:38:57.699 12:02:23 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1410013 00:38:57.699 12:02:23 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1410013 00:38:57.699 12:02:23 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:57.699 12:02:23 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 1410013 ']' 00:38:57.699 12:02:23 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:57.699 12:02:23 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:57.699 12:02:23 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:57.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:57.699 12:02:23 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:57.699 12:02:23 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:57.699 [2024-11-15 12:02:23.149790] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:38:57.699 [2024-11-15 12:02:23.149860] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1410013 ] 00:38:57.960 [2024-11-15 12:02:23.237835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:57.960 [2024-11-15 12:02:23.272914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:58.531 12:02:23 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:58.531 12:02:23 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:38:58.531 12:02:23 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:38:58.531 12:02:23 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:38:58.531 12:02:23 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:58.531 [2024-11-15 12:02:23.943845] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:58.531 null0 00:38:58.531 [2024-11-15 12:02:23.975904] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:58.531 [2024-11-15 12:02:23.976250] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:58.531 12:02:23 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:38:58.531 12:02:23 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:38:58.531 193295913 00:38:58.531 12:02:23 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:38:58.531 137757820 00:38:58.531 12:02:24 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1410349 00:38:58.531 12:02:24 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1410349 /var/tmp/bperf.sock 00:38:58.531 12:02:24 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:38:58.531 12:02:24 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 1410349 ']' 00:38:58.531 12:02:24 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:58.531 12:02:24 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:38:58.531 12:02:24 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:58.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:58.531 12:02:24 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:38:58.531 12:02:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:58.792 [2024-11-15 12:02:24.052203] Starting SPDK v25.01-pre git sha1 dec6d3843 / DPDK 24.03.0 initialization... 
00:38:58.792 [2024-11-15 12:02:24.052251] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1410349 ] 00:38:58.792 [2024-11-15 12:02:24.135478] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:58.792 [2024-11-15 12:02:24.165421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:59.365 12:02:24 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:38:59.365 12:02:24 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:38:59.365 12:02:24 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:38:59.365 12:02:24 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:38:59.627 12:02:25 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:38:59.627 12:02:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:59.888 12:02:25 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:59.888 12:02:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:39:00.149 [2024-11-15 12:02:25.386777] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:39:00.149 nvme0n1 00:39:00.149 12:02:25 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:39:00.149 12:02:25 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:39:00.149 12:02:25 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:00.149 12:02:25 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:00.149 12:02:25 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:00.149 12:02:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:00.409 12:02:25 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:39:00.409 12:02:25 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:00.409 12:02:25 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:39:00.409 12:02:25 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:39:00.409 12:02:25 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:39:00.409 12:02:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:00.409 12:02:25 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:39:00.409 12:02:25 keyring_linux -- keyring/linux.sh@25 -- # sn=193295913 00:39:00.409 12:02:25 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:39:00.409 12:02:25 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:39:00.409 12:02:25 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 193295913 == \1\9\3\2\9\5\9\1\3 ]] 00:39:00.409 12:02:25 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 193295913 00:39:00.409 12:02:25 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:39:00.409 12:02:25 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:39:00.669 Running I/O for 1 seconds... 00:39:01.610 24285.00 IOPS, 94.86 MiB/s 00:39:01.610 Latency(us) 00:39:01.610 [2024-11-15T11:02:27.108Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:01.610 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:39:01.610 nvme0n1 : 1.01 24285.02 94.86 0.00 0.00 5254.92 4096.00 9175.04 00:39:01.610 [2024-11-15T11:02:27.108Z] =================================================================================================================== 00:39:01.610 [2024-11-15T11:02:27.108Z] Total : 24285.02 94.86 0.00 0.00 5254.92 4096.00 9175.04 00:39:01.610 { 00:39:01.610 "results": [ 00:39:01.610 { 00:39:01.610 "job": "nvme0n1", 00:39:01.610 "core_mask": "0x2", 00:39:01.610 "workload": "randread", 00:39:01.610 "status": "finished", 00:39:01.610 "queue_depth": 128, 00:39:01.610 "io_size": 4096, 00:39:01.610 "runtime": 1.005311, 00:39:01.610 "iops": 24285.022246846995, 00:39:01.610 "mibps": 94.86336815174607, 00:39:01.610 "io_failed": 0, 00:39:01.610 "io_timeout": 0, 00:39:01.610 "avg_latency_us": 5254.924953442014, 00:39:01.610 "min_latency_us": 4096.0, 00:39:01.610 "max_latency_us": 9175.04 00:39:01.610 } 00:39:01.610 ], 00:39:01.610 "core_count": 1 00:39:01.610 } 00:39:01.610 12:02:26 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:39:01.610 12:02:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:39:01.871 12:02:27 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:39:01.871 12:02:27 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:39:01.871 12:02:27 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:39:01.871 12:02:27 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:39:01.871 12:02:27 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:39:01.871 12:02:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:39:01.871 12:02:27 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:39:01.871 12:02:27 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:39:01.871 12:02:27 keyring_linux -- keyring/linux.sh@23 -- # return 00:39:01.871 12:02:27 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:01.871 12:02:27 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:39:01.871 12:02:27 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:01.871 
12:02:27 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:39:01.871 12:02:27 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:01.871 12:02:27 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:39:01.871 12:02:27 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:39:01.871 12:02:27 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:01.871 12:02:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:39:02.132 [2024-11-15 12:02:27.505089] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:39:02.132 [2024-11-15 12:02:27.505574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1436b90 (107): Transport endpoint is not connected 00:39:02.132 [2024-11-15 12:02:27.506570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1436b90 (9): Bad file descriptor 00:39:02.132 [2024-11-15 12:02:27.507572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:39:02.132 [2024-11-15 12:02:27.507579] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:39:02.132 [2024-11-15 12:02:27.507585] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:39:02.132 [2024-11-15 12:02:27.507591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:39:02.132 request: 00:39:02.132 { 00:39:02.132 "name": "nvme0", 00:39:02.132 "trtype": "tcp", 00:39:02.132 "traddr": "127.0.0.1", 00:39:02.132 "adrfam": "ipv4", 00:39:02.132 "trsvcid": "4420", 00:39:02.132 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:02.132 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:39:02.132 "prchk_reftag": false, 00:39:02.132 "prchk_guard": false, 00:39:02.132 "hdgst": false, 00:39:02.132 "ddgst": false, 00:39:02.132 "psk": ":spdk-test:key1", 00:39:02.132 "allow_unrecognized_csi": false, 00:39:02.132 "method": "bdev_nvme_attach_controller", 00:39:02.132 "req_id": 1 00:39:02.132 } 00:39:02.132 Got JSON-RPC error response 00:39:02.132 response: 00:39:02.132 { 00:39:02.132 "code": -5, 00:39:02.132 "message": "Input/output error" 00:39:02.132 } 00:39:02.132 12:02:27 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:39:02.132 12:02:27 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:39:02.132 12:02:27 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:39:02.132 12:02:27 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:39:02.132 12:02:27 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:39:02.132 12:02:27 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:02.132 12:02:27 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:39:02.132 12:02:27 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:39:02.132 12:02:27 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:39:02.132 12:02:27 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:39:02.132 12:02:27 keyring_linux -- keyring/linux.sh@33 -- # sn=193295913 00:39:02.132 12:02:27 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 193295913 00:39:02.132 1 links removed 00:39:02.132 12:02:27 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:39:02.132 12:02:27 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:39:02.132 12:02:27 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:39:02.132 12:02:27 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:39:02.132 12:02:27 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:39:02.132 12:02:27 keyring_linux -- keyring/linux.sh@33 -- # sn=137757820 00:39:02.132 12:02:27 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 137757820 00:39:02.132 1 links removed 00:39:02.132 12:02:27 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1410349 00:39:02.132 12:02:27 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 1410349 ']' 00:39:02.132 12:02:27 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 1410349 00:39:02.132 12:02:27 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:39:02.132 12:02:27 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:02.132 12:02:27 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1410349 00:39:02.132 12:02:27 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:39:02.132 12:02:27 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:39:02.132 12:02:27 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1410349' 00:39:02.132 killing process with pid 1410349 00:39:02.132 12:02:27 keyring_linux -- common/autotest_common.sh@971 -- # kill 1410349 00:39:02.132 Received shutdown signal, test time was about 1.000000 seconds 00:39:02.132 00:39:02.132 
Latency(us) 00:39:02.132 [2024-11-15T11:02:27.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:02.132 [2024-11-15T11:02:27.630Z] =================================================================================================================== 00:39:02.132 [2024-11-15T11:02:27.630Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:02.132 12:02:27 keyring_linux -- common/autotest_common.sh@976 -- # wait 1410349 00:39:02.393 12:02:27 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1410013 00:39:02.393 12:02:27 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 1410013 ']' 00:39:02.393 12:02:27 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 1410013 00:39:02.393 12:02:27 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:39:02.393 12:02:27 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:39:02.393 12:02:27 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1410013 00:39:02.393 12:02:27 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:39:02.393 12:02:27 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:39:02.393 12:02:27 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1410013' 00:39:02.393 killing process with pid 1410013 00:39:02.393 12:02:27 keyring_linux -- common/autotest_common.sh@971 -- # kill 1410013 00:39:02.393 12:02:27 keyring_linux -- common/autotest_common.sh@976 -- # wait 1410013 00:39:02.653 00:39:02.653 real 0m5.196s 00:39:02.653 user 0m9.695s 00:39:02.653 sys 0m1.413s 00:39:02.653 12:02:27 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable 00:39:02.653 12:02:27 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:39:02.653 ************************************ 00:39:02.653 END TEST keyring_linux 00:39:02.653 ************************************ 00:39:02.653 12:02:28 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:39:02.653 12:02:28 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:39:02.653 12:02:28 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:39:02.653 12:02:28 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:39:02.653 12:02:28 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:39:02.653 12:02:28 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:39:02.653 12:02:28 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:39:02.653 12:02:28 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:39:02.653 12:02:28 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:39:02.653 12:02:28 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:39:02.653 12:02:28 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:39:02.653 12:02:28 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:39:02.653 12:02:28 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:39:02.653 12:02:28 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:39:02.653 12:02:28 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:39:02.653 12:02:28 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:39:02.653 12:02:28 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:39:02.653 12:02:28 -- common/autotest_common.sh@724 -- # xtrace_disable 00:39:02.653 12:02:28 -- common/autotest_common.sh@10 -- # set +x 00:39:02.653 12:02:28 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:39:02.653 12:02:28 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:39:02.653 12:02:28 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:39:02.653 12:02:28 -- common/autotest_common.sh@10 -- # set +x 00:39:10.802 INFO: APP EXITING 
00:39:10.802 INFO: killing all VMs 00:39:10.802 INFO: killing vhost app 00:39:10.802 INFO: EXIT DONE 00:39:14.109 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:39:14.109 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:39:14.109 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:39:14.109 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:39:14.109 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:39:14.109 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:39:14.109 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:39:14.109 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:39:14.109 0000:65:00.0 (144d a80a): Already using the nvme driver 00:39:14.109 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:39:14.109 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:39:14.109 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:39:14.109 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:39:14.109 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:39:14.109 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:39:14.109 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:39:14.109 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:39:18.319 Cleaning 00:39:18.319 Removing: /var/run/dpdk/spdk0/config 00:39:18.319 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:39:18.319 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:39:18.319 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:39:18.319 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:39:18.319 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:39:18.319 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:39:18.319 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:39:18.319 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:39:18.319 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:39:18.319 Removing: /var/run/dpdk/spdk0/hugepage_info 00:39:18.319 Removing: /var/run/dpdk/spdk1/config 00:39:18.319 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:39:18.319 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:39:18.319 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:39:18.319 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:39:18.319 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:39:18.319 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:39:18.319 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:39:18.319 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:39:18.319 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:39:18.319 Removing: /var/run/dpdk/spdk1/hugepage_info 00:39:18.319 Removing: /var/run/dpdk/spdk2/config 00:39:18.319 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:39:18.319 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:39:18.319 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:39:18.319 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:39:18.319 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:39:18.319 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:39:18.319 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:39:18.319 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:39:18.319 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:39:18.319 Removing: /var/run/dpdk/spdk2/hugepage_info 00:39:18.319 Removing: /var/run/dpdk/spdk3/config 00:39:18.319 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:39:18.319 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:39:18.319 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:39:18.319 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:39:18.319 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:39:18.319 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:39:18.319 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:39:18.319 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:39:18.319 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:39:18.319 Removing: /var/run/dpdk/spdk3/hugepage_info 00:39:18.319 Removing: /var/run/dpdk/spdk4/config 00:39:18.319 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:39:18.319 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:39:18.319 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:39:18.319 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:39:18.319 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:39:18.319 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:39:18.319 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:39:18.319 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:39:18.319 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:39:18.319 Removing: /var/run/dpdk/spdk4/hugepage_info 00:39:18.319 Removing: /dev/shm/bdev_svc_trace.1 00:39:18.319 Removing: /dev/shm/nvmf_trace.0 00:39:18.319 Removing: /dev/shm/spdk_tgt_trace.pid830750 00:39:18.319 Removing: /var/run/dpdk/spdk0 00:39:18.319 Removing: /var/run/dpdk/spdk1 00:39:18.319 Removing: /var/run/dpdk/spdk2 00:39:18.319 Removing: /var/run/dpdk/spdk3 00:39:18.319 Removing: /var/run/dpdk/spdk4 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1002507 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1008996 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1016116 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1023874 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1023940 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1024975 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1026098 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1027289 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1028233 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1028363 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1028571 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1028737 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1028739 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1029744 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1030750 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1031755 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1032431 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1032433 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1032763 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1034207 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1035588 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1045271 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1080061 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1085465 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1087468 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1089804 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1089824 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1090163 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1090503 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1091089 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1093260 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1094639 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1095123 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1097753 00:39:18.319 Removing: 
/var/run/dpdk/spdk_pid1098451 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1099169 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1104224 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1111043 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1111044 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1111045 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1116188 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1126430 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1131237 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1138488 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1139994 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1141838 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1143357 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1148926 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1154192 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1159228 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1168988 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1169103 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1174283 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1174487 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1174633 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1175280 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1175299 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1180688 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1181507 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1186857 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1190042 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1196742 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1203290 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1213527 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1222787 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1222792 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1245851 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1246645 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1247332 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1248022 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1249081 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1249761 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1250587 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1251444 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1256502 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1256837 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1264058 00:39:18.319 Removing: /var/run/dpdk/spdk_pid1264260 00:39:18.580 Removing: /var/run/dpdk/spdk_pid1270937 00:39:18.580 Removing: /var/run/dpdk/spdk_pid1276607 00:39:18.580 Removing: /var/run/dpdk/spdk_pid1288098 00:39:18.580 Removing: /var/run/dpdk/spdk_pid1288828 00:39:18.580 Removing: /var/run/dpdk/spdk_pid1293978 00:39:18.580 Removing: /var/run/dpdk/spdk_pid1294331 00:39:18.580 Removing: /var/run/dpdk/spdk_pid1299368 00:39:18.580 Removing: /var/run/dpdk/spdk_pid1306118 00:39:18.580 Removing: /var/run/dpdk/spdk_pid1309193 00:39:18.580 Removing: /var/run/dpdk/spdk_pid1321521 00:39:18.580 Removing: /var/run/dpdk/spdk_pid1332610 00:39:18.580 Removing: /var/run/dpdk/spdk_pid1334603 00:39:18.580 Removing: /var/run/dpdk/spdk_pid1335642 00:39:18.580 Removing: /var/run/dpdk/spdk_pid1355539 00:39:18.580 Removing: /var/run/dpdk/spdk_pid1360260 00:39:18.580 Removing: /var/run/dpdk/spdk_pid1363441 00:39:18.580 Removing: /var/run/dpdk/spdk_pid1371077 00:39:18.581 Removing: /var/run/dpdk/spdk_pid1371152 00:39:18.581 Removing: /var/run/dpdk/spdk_pid1377656 00:39:18.581 Removing: /var/run/dpdk/spdk_pid1379956 00:39:18.581 Removing: /var/run/dpdk/spdk_pid1382365 00:39:18.581 Removing: /var/run/dpdk/spdk_pid1383610 00:39:18.581 Removing: /var/run/dpdk/spdk_pid1386073 00:39:18.581 Removing: 
/var/run/dpdk/spdk_pid1387473 00:39:18.581 Removing: /var/run/dpdk/spdk_pid1397574 00:39:18.581 Removing: /var/run/dpdk/spdk_pid1398154 00:39:18.581 Removing: /var/run/dpdk/spdk_pid1398684 00:39:18.581 Removing: /var/run/dpdk/spdk_pid1401550 00:39:18.581 Removing: /var/run/dpdk/spdk_pid1402201 00:39:18.581 Removing: /var/run/dpdk/spdk_pid1402871 00:39:18.581 Removing: /var/run/dpdk/spdk_pid1407699 00:39:18.581 Removing: /var/run/dpdk/spdk_pid1407763 00:39:18.581 Removing: /var/run/dpdk/spdk_pid1409576 00:39:18.581 Removing: /var/run/dpdk/spdk_pid1410013 00:39:18.581 Removing: /var/run/dpdk/spdk_pid1410349 00:39:18.581 Removing: /var/run/dpdk/spdk_pid829259 00:39:18.581 Removing: /var/run/dpdk/spdk_pid830750 00:39:18.581 Removing: /var/run/dpdk/spdk_pid831597 00:39:18.581 Removing: /var/run/dpdk/spdk_pid832636 00:39:18.581 Removing: /var/run/dpdk/spdk_pid832978 00:39:18.581 Removing: /var/run/dpdk/spdk_pid834040 00:39:18.581 Removing: /var/run/dpdk/spdk_pid834262 00:39:18.581 Removing: /var/run/dpdk/spdk_pid834514 00:39:18.581 Removing: /var/run/dpdk/spdk_pid835654 00:39:18.581 Removing: /var/run/dpdk/spdk_pid836389 00:39:18.581 Removing: /var/run/dpdk/spdk_pid836743 00:39:18.581 Removing: /var/run/dpdk/spdk_pid837088 00:39:18.581 Removing: /var/run/dpdk/spdk_pid837431 00:39:18.581 Removing: /var/run/dpdk/spdk_pid837746 00:39:18.581 Removing: /var/run/dpdk/spdk_pid838085 00:39:18.581 Removing: /var/run/dpdk/spdk_pid838435 00:39:18.581 Removing: /var/run/dpdk/spdk_pid838789 00:39:18.581 Removing: /var/run/dpdk/spdk_pid839898 00:39:18.581 Removing: /var/run/dpdk/spdk_pid843367 00:39:18.581 Removing: /var/run/dpdk/spdk_pid843680 00:39:18.581 Removing: /var/run/dpdk/spdk_pid844002 00:39:18.581 Removing: /var/run/dpdk/spdk_pid844236 00:39:18.581 Removing: /var/run/dpdk/spdk_pid844615 00:39:18.581 Removing: /var/run/dpdk/spdk_pid844946 00:39:18.842 Removing: /var/run/dpdk/spdk_pid845321 00:39:18.842 Removing: /var/run/dpdk/spdk_pid845552 00:39:18.842 Removing: /var/run/dpdk/spdk_pid845835 00:39:18.842 Removing: /var/run/dpdk/spdk_pid846034 00:39:18.842 Removing: /var/run/dpdk/spdk_pid846287 00:39:18.842 Removing: /var/run/dpdk/spdk_pid846406 00:39:18.842 Removing: /var/run/dpdk/spdk_pid846877 00:39:18.842 Removing: /var/run/dpdk/spdk_pid847206 00:39:18.842 Removing: /var/run/dpdk/spdk_pid847610 00:39:18.842 Removing: /var/run/dpdk/spdk_pid852276 00:39:18.842 Removing: /var/run/dpdk/spdk_pid857520 00:39:18.842 Removing: /var/run/dpdk/spdk_pid870162 00:39:18.842 Removing: /var/run/dpdk/spdk_pid870969 00:39:18.842 Removing: /var/run/dpdk/spdk_pid876240 00:39:18.842 Removing: /var/run/dpdk/spdk_pid876593 00:39:18.842 Removing: /var/run/dpdk/spdk_pid881867 00:39:18.842 Removing: /var/run/dpdk/spdk_pid888893 00:39:18.842 Removing: /var/run/dpdk/spdk_pid892177 00:39:18.842 Removing: /var/run/dpdk/spdk_pid904718 00:39:18.842 Removing: /var/run/dpdk/spdk_pid915769 00:39:18.842 Removing: /var/run/dpdk/spdk_pid917892 00:39:18.842 Removing: /var/run/dpdk/spdk_pid918914 00:39:18.842 Removing: /var/run/dpdk/spdk_pid940452 00:39:18.842 Removing: /var/run/dpdk/spdk_pid945401 00:39:18.842 Clean 00:39:18.842 12:02:44 -- common/autotest_common.sh@1451 -- # return 0 00:39:18.842 12:02:44 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:39:18.842 12:02:44 -- common/autotest_common.sh@730 -- # xtrace_disable 00:39:18.842 12:02:44 -- common/autotest_common.sh@10 -- # set +x 00:39:18.842 12:02:44 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:39:18.842 12:02:44 -- common/autotest_common.sh@730 -- # 
00:39:18.842 12:02:44 -- common/autotest_common.sh@10 -- # set +x
00:39:19.104 12:02:44 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:39:19.104 12:02:44 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:39:19.104 12:02:44 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:39:19.104 12:02:44 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:39:19.104 12:02:44 -- spdk/autotest.sh@394 -- # hostname
00:39:19.104 12:02:44 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:39:19.104 geninfo: WARNING: invalid characters removed from testname!
00:39:45.683 12:03:10 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:47.598 12:03:13 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:50.140 12:03:15 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:51.523 12:03:16 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:53.433 12:03:18 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:54.815 12:03:20 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:57.360 12:03:22 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:39:57.360 12:03:22 -- spdk/autorun.sh@1 -- $ timing_finish
00:39:57.360 12:03:22 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:39:57.360 12:03:22 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:39:57.360 12:03:22 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:39:57.360 12:03:22 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:39:57.360 + [[ -n 743835 ]]
00:39:57.360 + sudo kill 743835
00:39:57.370 [Pipeline] }
00:39:57.386 [Pipeline] // stage
00:39:57.393 [Pipeline] }
00:39:57.408 [Pipeline] // timeout
00:39:57.414 [Pipeline] }
00:39:57.430 [Pipeline] // catchError
00:39:57.436 [Pipeline] }
00:39:57.451 [Pipeline] // wrap
00:39:57.457 [Pipeline] }
00:39:57.470 [Pipeline] // catchError
00:39:57.479 [Pipeline] stage
00:39:57.482 [Pipeline] { (Epilogue)
00:39:57.496 [Pipeline] catchError
00:39:57.498 [Pipeline] {
00:39:57.509 [Pipeline] echo
00:39:57.510 Cleanup processes
00:39:57.516 [Pipeline] sh
00:39:57.804 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:57.804 1423922 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:57.826 [Pipeline] sh
00:39:58.120 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:58.120 ++ grep -v 'sudo pgrep'
00:39:58.120 ++ awk '{print $1}'
00:39:58.120 + sudo kill -9
00:39:58.120 + true
00:39:58.134 [Pipeline] sh
00:39:58.423 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:40:10.681 [Pipeline] sh
00:40:10.970 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:40:10.970 Artifacts sizes are good
00:40:10.985 [Pipeline] archiveArtifacts
00:40:10.993 Archiving artifacts
00:40:11.122 [Pipeline] sh
00:40:11.406 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:40:11.418 [Pipeline] cleanWs
00:40:11.427 [WS-CLEANUP] Deleting project workspace...
00:40:11.427 [WS-CLEANUP] Deferred wipeout is used...
00:40:11.434 [WS-CLEANUP] done
00:40:11.435 [Pipeline] }
00:40:11.445 [Pipeline] // catchError
00:40:11.454 [Pipeline] sh
00:40:11.792 + logger -p user.info -t JENKINS-CI
00:40:11.845 [Pipeline] }
00:40:11.857 [Pipeline] // stage
00:40:11.861 [Pipeline] }
00:40:11.873 [Pipeline] // node
00:40:11.877 [Pipeline] End of Pipeline
00:40:11.898 Finished: SUCCESS